I0429 13:02:32.840402 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0429 13:02:32.840633 7 e2e.go:129] Starting e2e run "d29d9444-0c6a-4445-a5ea-ffbefe9e2a77" on Ginkgo node 1
{"msg":"Test Suite starting","total":290,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588165351 - Will randomize all specs
Will run 290 of 5093 specs
Apr 29 13:02:32.893: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 13:02:32.898: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 13:02:32.923: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 13:02:32.986: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 13:02:32.986: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 29 13:02:32.986: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 13:02:33.001: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 29 13:02:33.001: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 13:02:33.001: INFO: e2e test version: v1.19.0-alpha.2.226+0c3c2cd6ac8c9f
Apr 29 13:02:33.002: INFO: kube-apiserver version: v1.18.2
Apr 29 13:02:33.002: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 13:02:33.008: INFO: Cluster IP family: ipv4
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:02:33.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Apr 29 13:02:33.097: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-b7c0da06-7a24-4fc6-8107-f9ac68842057
STEP: Creating a pod to test consume secrets
Apr 29 13:02:33.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25" in namespace "projected-555" to be "Succeeded or Failed"
Apr 29 13:02:33.144: INFO: Pod "pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25": Phase="Pending", Reason="", readiness=false. Elapsed: 21.354354ms
Apr 29 13:02:35.262: INFO: Pod "pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139603702s
Apr 29 13:02:37.266: INFO: Pod "pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14395022s
STEP: Saw pod success
Apr 29 13:02:37.266: INFO: Pod "pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25" satisfied condition "Succeeded or Failed"
Apr 29 13:02:37.270: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25 container projected-secret-volume-test:
STEP: delete the pod
Apr 29 13:02:37.329: INFO: Waiting for pod pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25 to disappear
Apr 29 13:02:37.341: INFO: Pod pod-projected-secrets-cf0a45d6-0e28-4bcd-bb0b-8b9f310f0c25 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:02:37.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-555" for this suite.
•
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:02:37.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c
STEP: updating the pod
Apr 29 13:02:44.118: INFO: Successfully updated pod "var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
Apr 29 13:02:44.154: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-5127 PodName:var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:02:44.154: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:02:44.182718 7 log.go:172] (0xc002b97290) (0xc000bf26e0) Create stream
I0429 13:02:44.182750 7 log.go:172] (0xc002b97290) (0xc000bf26e0) Stream added, broadcasting: 1
I0429 13:02:44.185879 7 log.go:172] (0xc002b97290) Reply frame received for 1
I0429 13:02:44.185925 7 log.go:172] (0xc002b97290) (0xc000bb4000) Create stream
I0429 13:02:44.185944 7 log.go:172] (0xc002b97290) (0xc000bb4000) Stream added, broadcasting: 3
I0429 13:02:44.186971 7 log.go:172] (0xc002b97290) Reply frame received for 3
I0429 13:02:44.187027 7 log.go:172] (0xc002b97290) (0xc000e2a320) Create stream
I0429 13:02:44.187052 7 log.go:172] (0xc002b97290) (0xc000e2a320) Stream added, broadcasting: 5
I0429 13:02:44.188258 7 log.go:172] (0xc002b97290) Reply frame received for 5
I0429 13:02:44.250892 7 log.go:172] (0xc002b97290) Data frame received for 3
I0429 13:02:44.250927 7 log.go:172] (0xc000bb4000) (3) Data frame handling
I0429 13:02:44.250947 7 log.go:172] (0xc002b97290) Data frame received for 5
I0429 13:02:44.250956 7 log.go:172] (0xc000e2a320) (5) Data frame handling
I0429 13:02:44.252606 7 log.go:172] (0xc002b97290) Data frame received for 1
I0429 13:02:44.252646 7 log.go:172] (0xc000bf26e0) (1) Data frame handling
I0429 13:02:44.252677 7 log.go:172] (0xc000bf26e0) (1) Data frame sent
I0429 13:02:44.252695 7 log.go:172] (0xc002b97290) (0xc000bf26e0) Stream removed, broadcasting: 1
I0429 13:02:44.252893 7 log.go:172] (0xc002b97290) Go away received
I0429 13:02:44.253036 7 log.go:172] (0xc002b97290) (0xc000bf26e0) Stream removed, broadcasting: 1
I0429 13:02:44.253050 7 log.go:172] (0xc002b97290) (0xc000bb4000) Stream removed, broadcasting: 3
I0429 13:02:44.253056 7 log.go:172] (0xc002b97290) (0xc000e2a320) Stream removed, broadcasting: 5
Apr 29 13:02:44.253: INFO: Pod exec output: /
STEP: Waiting for container to restart
Apr 29 13:02:44.256: INFO: Container dapi-container, restarts: 0
Apr 29 13:02:54.262: INFO: Container dapi-container, restarts: 0
Apr 29 13:03:04.261: INFO: Container dapi-container, restarts: 0
Apr 29 13:03:14.261: INFO: Container dapi-container, restarts: 0
Apr 29 13:03:24.261: INFO: Container dapi-container, restarts: 1
Apr 29 13:03:24.261: INFO: Container has restart count: 1
STEP: Rewriting the file
Apr 29 13:03:24.264: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-5127 PodName:var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:03:24.264: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:03:24.296205 7 log.go:172] (0xc0028de210) (0xc000bb4dc0) Create stream
I0429 13:03:24.296247 7 log.go:172] (0xc0028de210) (0xc000bb4dc0) Stream added, broadcasting: 1
I0429 13:03:24.299410 7 log.go:172] (0xc0028de210) Reply frame received for 1
I0429 13:03:24.299439 7 log.go:172] (0xc0028de210) (0xc000e2b860) Create stream
I0429 13:03:24.299446 7 log.go:172] (0xc0028de210) (0xc000e2b860) Stream added, broadcasting: 3
I0429 13:03:24.309428 7 log.go:172] (0xc0028de210) Reply frame received for 3
I0429 13:03:24.309479 7 log.go:172] (0xc0028de210) (0xc002c3b680) Create stream
I0429 13:03:24.309489 7 log.go:172] (0xc0028de210) (0xc002c3b680) Stream added, broadcasting: 5
I0429 13:03:24.310696 7 log.go:172] (0xc0028de210) Reply frame received for 5
I0429 13:03:24.397355 7 log.go:172] (0xc0028de210) Data frame received for 5
I0429 13:03:24.397403 7 log.go:172] (0xc002c3b680) (5) Data frame handling
I0429 13:03:24.397437 7 log.go:172] (0xc0028de210) Data frame received for 3
I0429 13:03:24.397455 7 log.go:172] (0xc000e2b860) (3) Data frame handling
I0429 13:03:24.398716 7 log.go:172] (0xc0028de210) Data frame received for 1
I0429 13:03:24.398737 7 log.go:172] (0xc000bb4dc0) (1) Data frame handling
I0429 13:03:24.398753 7 log.go:172] (0xc000bb4dc0) (1) Data frame sent
I0429 13:03:24.398774 7 log.go:172] (0xc0028de210) (0xc000bb4dc0) Stream removed, broadcasting: 1
I0429 13:03:24.398834 7 log.go:172] (0xc0028de210) (0xc000bb4dc0) Stream removed, broadcasting: 1
I0429 13:03:24.398864 7 log.go:172] (0xc0028de210) Go away received
I0429 13:03:24.398908 7 log.go:172] (0xc0028de210) (0xc000e2b860) Stream removed, broadcasting: 3
I0429 13:03:24.398934 7 log.go:172] (0xc0028de210) (0xc002c3b680) Stream removed, broadcasting: 5
Apr 29 13:03:24.398: INFO: Pod exec output:
STEP: Waiting for container to stop restarting
Apr 29 13:03:52.407: INFO: Container has restart count: 2
Apr 29 13:04:54.406: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
Apr 29 13:04:54.410: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-5127 PodName:var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:04:54.410: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:04:54.446012 7 log.go:172] (0xc0028de160) (0xc000387a40) Create stream
I0429 13:04:54.446051 7 log.go:172] (0xc0028de160) (0xc000387a40) Stream added, broadcasting: 1
I0429 13:04:54.447994 7 log.go:172] (0xc0028de160) Reply frame received for 1
I0429 13:04:54.448043 7 log.go:172] (0xc0028de160) (0xc0005c20a0) Create stream
I0429 13:04:54.448073 7 log.go:172] (0xc0028de160) (0xc0005c20a0) Stream added, broadcasting: 3
I0429 13:04:54.448903 7 log.go:172] (0xc0028de160) Reply frame received for 3
I0429 13:04:54.448932 7 log.go:172] (0xc0028de160) (0xc000649720) Create stream
I0429 13:04:54.448942 7 log.go:172] (0xc0028de160) (0xc000649720) Stream added, broadcasting: 5
I0429 13:04:54.450298 7 log.go:172] (0xc0028de160) Reply frame received for 5
I0429 13:04:54.528644 7 log.go:172] (0xc0028de160) Data frame received for 5
I0429 13:04:54.528675 7 log.go:172] (0xc000649720) (5) Data frame handling
I0429 13:04:54.528696 7 log.go:172] (0xc0028de160) Data frame received for 3
I0429 13:04:54.528709 7 log.go:172] (0xc0005c20a0) (3) Data frame handling
I0429 13:04:54.530202 7 log.go:172] (0xc0028de160) Data frame received for 1
I0429 13:04:54.530232 7 log.go:172] (0xc000387a40) (1) Data frame handling
I0429 13:04:54.530262 7 log.go:172] (0xc000387a40) (1) Data frame sent
I0429 13:04:54.530291 7 log.go:172] (0xc0028de160) (0xc000387a40) Stream removed, broadcasting: 1
I0429 13:04:54.530312 7 log.go:172] (0xc0028de160) Go away received
I0429 13:04:54.530473 7 log.go:172] (0xc0028de160) (0xc000387a40) Stream removed, broadcasting: 1
I0429 13:04:54.530517 7 log.go:172] (0xc0028de160) (0xc0005c20a0) Stream removed, broadcasting: 3
I0429 13:04:54.530539 7 log.go:172] (0xc0028de160) (0xc000649720) Stream removed, broadcasting: 5
Apr 29 13:04:54.534: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-5127 PodName:var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:04:54.534: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:04:54.567803 7 log.go:172] (0xc002b96f20) (0xc000771040) Create stream
I0429 13:04:54.567830 7 log.go:172] (0xc002b96f20) (0xc000771040) Stream added, broadcasting: 1
I0429 13:04:54.570078 7 log.go:172] (0xc002b96f20) Reply frame received for 1
I0429 13:04:54.570122 7 log.go:172] (0xc002b96f20) (0xc0001892c0) Create stream
I0429 13:04:54.570137 7 log.go:172] (0xc002b96f20) (0xc0001892c0) Stream added, broadcasting: 3
I0429 13:04:54.570907 7 log.go:172] (0xc002b96f20) Reply frame received for 3
I0429 13:04:54.570933 7 log.go:172] (0xc002b96f20) (0xc0007712c0) Create stream
I0429 13:04:54.570944 7 log.go:172] (0xc002b96f20) (0xc0007712c0) Stream added, broadcasting: 5
I0429 13:04:54.571645 7 log.go:172] (0xc002b96f20) Reply frame received for 5
I0429 13:04:54.626969 7 log.go:172] (0xc002b96f20) Data frame received for 3
I0429 13:04:54.626993 7 log.go:172] (0xc0001892c0) (3) Data frame handling
I0429 13:04:54.627011 7 log.go:172] (0xc002b96f20) Data frame received for 5
I0429 13:04:54.627019 7 log.go:172] (0xc0007712c0) (5) Data frame handling
I0429 13:04:54.628307 7 log.go:172] (0xc002b96f20) Data frame received for 1
I0429 13:04:54.628343 7 log.go:172] (0xc000771040) (1) Data frame handling
I0429 13:04:54.628357 7 log.go:172] (0xc000771040) (1) Data frame sent
I0429 13:04:54.628380 7 log.go:172] (0xc002b96f20) (0xc000771040) Stream removed, broadcasting: 1
I0429 13:04:54.628407 7 log.go:172] (0xc002b96f20) Go away received
I0429 13:04:54.628483 7 log.go:172] (0xc002b96f20) (0xc000771040) Stream removed, broadcasting: 1
I0429 13:04:54.628503 7 log.go:172] (0xc002b96f20) (0xc0001892c0) Stream removed, broadcasting: 3
I0429 13:04:54.628509 7 log.go:172] (0xc002b96f20) (0xc0007712c0) Stream removed, broadcasting: 5
Apr 29 13:04:54.628: INFO: Deleting pod "var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c" in namespace "var-expansion-5127"
Apr 29 13:04:54.634: INFO: Wait up to 5m0s for pod "var-expansion-db739d44-7cdc-44dc-95d8-cc56968c622c" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:05:34.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5127" for this suite.
• [SLOW TEST:177.288 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":290,"completed":2,"skipped":54,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:05:34.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Apr 29 13:05:34.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Apr 29 13:05:45.512: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 13:05:48.468: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:05:59.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1623" for this suite.
• [SLOW TEST:24.664 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":290,"completed":3,"skipped":76,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:05:59.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5439
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5439
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5439
Apr 29 13:05:59.445: INFO: Found 0 stateful pods, waiting for 1
Apr 29 13:06:09.450: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 29 13:06:09.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:06:12.516: INFO: stderr: "I0429 13:06:12.384797 33 log.go:172] (0xc00070e000) (0xc0006d2c80) Create stream\nI0429 13:06:12.384889 33 log.go:172] (0xc00070e000) (0xc0006d2c80) Stream added, broadcasting: 1\nI0429 13:06:12.388350 33 log.go:172] (0xc00070e000) Reply frame received for 1\nI0429 13:06:12.388410 33 log.go:172] (0xc00070e000) (0xc0006c2500) Create stream\nI0429 13:06:12.388434 33 log.go:172] (0xc00070e000) (0xc0006c2500) Stream added, broadcasting: 3\nI0429 13:06:12.389527 33 log.go:172] (0xc00070e000) Reply frame received for 3\nI0429 13:06:12.389566 33 log.go:172] (0xc00070e000) (0xc0006d3c20) Create stream\nI0429 13:06:12.389583 33 log.go:172] (0xc00070e000) (0xc0006d3c20) Stream added, broadcasting: 5\nI0429 13:06:12.390401 33 log.go:172] (0xc00070e000) Reply frame received for 5\nI0429 13:06:12.478568 33 log.go:172] (0xc00070e000) Data frame received for 5\nI0429 13:06:12.478599 33 log.go:172] (0xc0006d3c20) (5) Data frame handling\nI0429 13:06:12.478623 33 log.go:172] (0xc0006d3c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:06:12.510323 33 log.go:172] (0xc00070e000) Data frame received for 3\nI0429 13:06:12.510352 33 log.go:172] (0xc0006c2500) (3) Data frame handling\nI0429 13:06:12.510377 33 log.go:172] (0xc0006c2500) (3) Data frame sent\nI0429 13:06:12.510438 33 log.go:172] (0xc00070e000) Data frame received for 5\nI0429 13:06:12.510449 33 log.go:172] (0xc0006d3c20) (5) Data frame handling\nI0429 13:06:12.510634 33 log.go:172] (0xc00070e000) Data frame received for 3\nI0429 13:06:12.510650 33 log.go:172] (0xc0006c2500) (3) Data frame handling\nI0429 13:06:12.512500 33 log.go:172] (0xc00070e000) Data frame received for 1\nI0429 13:06:12.512515 33 log.go:172] (0xc0006d2c80) (1) Data frame handling\nI0429 13:06:12.512528 33 log.go:172] (0xc0006d2c80) (1) Data frame sent\nI0429 13:06:12.512542 33 log.go:172] (0xc00070e000) (0xc0006d2c80) Stream removed, broadcasting: 1\nI0429 13:06:12.512764 33 log.go:172] (0xc00070e000) Go away received\nI0429 13:06:12.512793 33 log.go:172] (0xc00070e000) (0xc0006d2c80) Stream removed, broadcasting: 1\nI0429 13:06:12.512864 33 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc0006c2500), 0x5:(*spdystream.Stream)(0xc0006d3c20)}\nI0429 13:06:12.512911 33 log.go:172] (0xc00070e000) (0xc0006c2500) Stream removed, broadcasting: 3\nI0429 13:06:12.512935 33 log.go:172] (0xc00070e000) (0xc0006d3c20) Stream removed, broadcasting: 5\n"
Apr 29 13:06:12.516: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:06:12.516: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 29 13:06:12.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 29 13:06:22.736: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:06:22.737: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:06:22.777: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999407s
Apr 29 13:06:23.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971036105s
Apr 29 13:06:24.813: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966516147s
Apr 29 13:06:25.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.934822767s
Apr 29 13:06:26.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.930482245s
Apr 29 13:06:27.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.925478928s
Apr 29 13:06:28.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.920234652s
Apr 29 13:06:29.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.914678352s
Apr 29 13:06:30.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.909486275s
Apr 29 13:06:31.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 904.565406ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5439
Apr 29 13:06:32.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:06:33.052: INFO: stderr: "I0429 13:06:32.983530 64 log.go:172] (0xc000a56840) (0xc000683b80) Create stream\nI0429 13:06:32.983585 64 log.go:172] (0xc000a56840) (0xc000683b80) Stream added, broadcasting: 1\nI0429 13:06:32.986507 64 log.go:172] (0xc000a56840) Reply frame received for 1\nI0429 13:06:32.986550 64 log.go:172] (0xc000a56840) (0xc0005401e0) Create stream\nI0429 13:06:32.986560 64 log.go:172] (0xc000a56840) (0xc0005401e0) Stream added, broadcasting: 3\nI0429 13:06:32.987683 64 log.go:172] (0xc000a56840) Reply frame received for 3\nI0429 13:06:32.987725 64 log.go:172] (0xc000a56840) (0xc000506d20) Create stream\nI0429 13:06:32.987757 64 log.go:172] (0xc000a56840) (0xc000506d20) Stream added, broadcasting: 5\nI0429 13:06:32.988590 64 log.go:172] (0xc000a56840) Reply frame received for 5\nI0429 13:06:33.046107 64 log.go:172] (0xc000a56840) Data frame received for 3\nI0429 13:06:33.046148 64 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0429 13:06:33.046161 64 log.go:172] (0xc0005401e0) (3) Data frame sent\nI0429 13:06:33.046172 64 log.go:172] (0xc000a56840) Data frame received for 3\nI0429 13:06:33.046182 64 log.go:172] (0xc0005401e0) (3) Data frame handling\nI0429 13:06:33.046212 64 log.go:172] (0xc000a56840) Data frame received for 5\nI0429 13:06:33.046222 64 log.go:172] (0xc000506d20) (5) Data frame handling\nI0429 13:06:33.046247 64 log.go:172] (0xc000506d20) (5) Data frame sent\nI0429 13:06:33.046262 64 log.go:172] (0xc000a56840) Data frame received for 5\nI0429 13:06:33.046272 64 log.go:172] (0xc000506d20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:06:33.047821 64 log.go:172] (0xc000a56840) Data frame received for 1\nI0429 13:06:33.047844 64 log.go:172] (0xc000683b80) (1) Data frame handling\nI0429 13:06:33.047855 64 log.go:172] (0xc000683b80) (1) Data frame sent\nI0429 13:06:33.047875 64 log.go:172] (0xc000a56840) (0xc000683b80) Stream removed, broadcasting: 1\nI0429 13:06:33.047929 64 log.go:172] (0xc000a56840) Go away received\nI0429 13:06:33.048191 64 log.go:172] (0xc000a56840) (0xc000683b80) Stream removed, broadcasting: 1\nI0429 13:06:33.048209 64 log.go:172] (0xc000a56840) (0xc0005401e0) Stream removed, broadcasting: 3\nI0429 13:06:33.048218 64 log.go:172] (0xc000a56840) (0xc000506d20) Stream removed, broadcasting: 5\n"
Apr 29 13:06:33.052: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:06:33.052: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 29 13:06:33.059: INFO: Found 1 stateful pods, waiting for 3
Apr 29 13:06:43.066: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:06:43.066: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:06:43.066: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Apr 29 13:06:43.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:06:43.301: INFO: stderr: "I0429 13:06:43.210411 88 log.go:172] (0xc00003a420) (0xc0006b0d20) Create stream\nI0429 13:06:43.210486 88 log.go:172] (0xc00003a420) (0xc0006b0d20) Stream added, broadcasting: 1\nI0429 13:06:43.213695 88 log.go:172] (0xc00003a420) Reply frame received for 1\nI0429 13:06:43.213774 88 log.go:172] (0xc00003a420) (0xc0006a8e60) Create stream\nI0429 13:06:43.213801 88 log.go:172] (0xc00003a420) (0xc0006a8e60) Stream added, broadcasting: 3\nI0429 13:06:43.214678 88 log.go:172] (0xc00003a420) Reply frame received for 3\nI0429 13:06:43.214732 88 log.go:172] (0xc00003a420) (0xc0005285a0) Create stream\nI0429 13:06:43.214747 88 log.go:172] (0xc00003a420) (0xc0005285a0) Stream added, broadcasting: 5\nI0429 13:06:43.215577 88 log.go:172] (0xc00003a420) Reply frame received for 5\nI0429 13:06:43.293600 88 log.go:172] (0xc00003a420) Data frame received for 3\nI0429 13:06:43.293655 88 log.go:172] (0xc0006a8e60) (3) Data frame handling\nI0429 13:06:43.293681 88 log.go:172] (0xc0006a8e60) (3) Data frame sent\nI0429 13:06:43.293702 88 log.go:172] (0xc00003a420) Data frame received for 3\nI0429 13:06:43.293718 88 log.go:172] (0xc0006a8e60) (3) Data frame handling\nI0429 13:06:43.293739 88 log.go:172] (0xc00003a420) Data frame received for 5\nI0429 13:06:43.293756 88 log.go:172] (0xc0005285a0) (5) Data frame handling\nI0429 13:06:43.293773 88 log.go:172] (0xc0005285a0) (5) Data frame sent\nI0429 13:06:43.293789 88 log.go:172] (0xc00003a420) Data frame received for 5\nI0429 13:06:43.293804 88 log.go:172] (0xc0005285a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:06:43.295657 88 log.go:172] (0xc00003a420) Data frame received for 1\nI0429 13:06:43.295686 88 log.go:172] (0xc0006b0d20) (1) Data frame handling\nI0429 13:06:43.295713 88 log.go:172] (0xc0006b0d20) (1) Data frame sent\nI0429 13:06:43.295747 88 log.go:172] (0xc00003a420) (0xc0006b0d20) Stream removed, broadcasting: 1\nI0429 13:06:43.295771 88 log.go:172] (0xc00003a420) Go away received\nI0429 13:06:43.296167 88 log.go:172] (0xc00003a420) (0xc0006b0d20) Stream removed, broadcasting: 1\nI0429 13:06:43.296192 88 log.go:172] (0xc00003a420) (0xc0006a8e60) Stream removed, broadcasting: 3\nI0429 13:06:43.296205 88 log.go:172] (0xc00003a420) (0xc0005285a0) Stream removed, broadcasting: 5\n"
Apr 29 13:06:43.301: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:06:43.301: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 29 13:06:43.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:06:43.553: INFO: stderr: "I0429 13:06:43.444961 110 log.go:172] (0xc00091ef20) (0xc000a4c5a0) Create stream\nI0429 13:06:43.446008 110 log.go:172] (0xc00091ef20) (0xc000a4c5a0) Stream added, broadcasting: 1\nI0429 13:06:43.450350 110 log.go:172] (0xc00091ef20) Reply frame received for 1\nI0429 13:06:43.450408 110 log.go:172] (0xc00091ef20) (0xc000840dc0) Create stream\nI0429 13:06:43.450425 110 log.go:172] (0xc00091ef20) (0xc000840dc0) Stream added, broadcasting: 3\nI0429 13:06:43.451392 110 log.go:172] (0xc00091ef20) Reply frame received for 3\nI0429 13:06:43.451427 110 log.go:172] (0xc00091ef20) (0xc000824be0) Create stream\nI0429 13:06:43.451437 110 log.go:172] (0xc00091ef20) (0xc000824be0) Stream added, broadcasting: 5\nI0429 13:06:43.452398 110 log.go:172] (0xc00091ef20) Reply frame received for 5\nI0429 13:06:43.508107 110 log.go:172] (0xc00091ef20) Data frame received for 5\nI0429 13:06:43.508147 110 log.go:172] (0xc000824be0) (5) Data frame handling\nI0429 13:06:43.508176 110 log.go:172] (0xc000824be0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:06:43.544703 110 log.go:172] (0xc00091ef20) Data frame received for 3\nI0429 13:06:43.544858 110 log.go:172] (0xc000840dc0) (3) Data frame handling\nI0429 13:06:43.544951 110 log.go:172] (0xc000840dc0) (3) Data frame sent\nI0429 13:06:43.545048 110 log.go:172] (0xc00091ef20) Data frame received for 3\nI0429 13:06:43.545375 110 log.go:172] (0xc000840dc0) (3) Data frame handling\nI0429 13:06:43.545673 110 log.go:172] (0xc00091ef20) Data frame received for 5\nI0429 13:06:43.545719 110 log.go:172] (0xc000824be0) (5) Data frame handling\nI0429 13:06:43.548060 110 log.go:172] (0xc00091ef20) Data frame received for 1\nI0429 13:06:43.548084 110 log.go:172] (0xc000a4c5a0) (1) Data frame handling\nI0429 13:06:43.548097 110 log.go:172] (0xc000a4c5a0) (1) Data frame sent\nI0429 13:06:43.548113 110 log.go:172] (0xc00091ef20) (0xc000a4c5a0) Stream removed, broadcasting: 1\nI0429 13:06:43.548371 110 log.go:172] (0xc00091ef20) Go away received\nI0429 13:06:43.548444 110 log.go:172] (0xc00091ef20) (0xc000a4c5a0) Stream removed, broadcasting: 1\nI0429 13:06:43.548458 110 log.go:172] (0xc00091ef20) (0xc000840dc0) Stream removed, broadcasting: 3\nI0429 13:06:43.548465 110 log.go:172] (0xc00091ef20) (0xc000824be0) Stream removed, broadcasting: 5\n"
Apr 29 13:06:43.553: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:06:43.553: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 29 13:06:43.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:06:43.780: INFO: stderr: "I0429 13:06:43.680604 131 log.go:172] (0xc0006d42c0) (0xc0000ddae0) Create stream\nI0429 13:06:43.680656 131 log.go:172] (0xc0006d42c0) (0xc0000ddae0) Stream added, broadcasting: 1\nI0429 13:06:43.683515 131 log.go:172] (0xc0006d42c0) Reply frame received for 1\nI0429 13:06:43.683579 131 log.go:172] (0xc0006d42c0) (0xc000508140) Create stream\nI0429 13:06:43.683598 131 log.go:172] (0xc0006d42c0) (0xc000508140) Stream added, broadcasting: 3\nI0429 13:06:43.684733 131 log.go:172] (0xc0006d42c0) Reply frame received for 3\nI0429 13:06:43.684775 131 log.go:172] (0xc0006d42c0) (0xc000508640) Create stream\nI0429 13:06:43.684788 131 log.go:172] (0xc0006d42c0) (0xc000508640) Stream added, broadcasting: 5\nI0429 13:06:43.685745 131 log.go:172] (0xc0006d42c0) Reply frame received for 5\nI0429 13:06:43.739708 131 log.go:172] (0xc0006d42c0) Data frame received for 5\nI0429 13:06:43.739741 131 log.go:172] (0xc000508640) (5) Data frame handling\nI0429 13:06:43.739760 131 log.go:172] (0xc000508640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:06:43.772633 131 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0429 13:06:43.772676 131 log.go:172] (0xc000508140) (3) Data frame handling\nI0429 13:06:43.772698 131 log.go:172] (0xc000508140) (3) Data frame sent\nI0429 13:06:43.773004 131 log.go:172] (0xc0006d42c0) Data frame received for 5\nI0429 13:06:43.773033 131 log.go:172] (0xc000508640) (5) Data frame handling\nI0429 13:06:43.773072 131 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0429 13:06:43.773235 131 log.go:172] (0xc000508140) (3) Data frame handling\nI0429 13:06:43.775089 131 log.go:172] (0xc0006d42c0) Data frame received for 1\nI0429 13:06:43.775118 131 log.go:172] (0xc0000ddae0) (1) Data frame handling\nI0429 13:06:43.775140 131 log.go:172] (0xc0000ddae0) (1) Data frame sent\nI0429 13:06:43.775163 131 log.go:172] (0xc0006d42c0) (0xc0000ddae0) Stream removed, broadcasting: 1\nI0429 13:06:43.775374 131 log.go:172] (0xc0006d42c0) Go away received\nI0429 13:06:43.775709 131 log.go:172] (0xc0006d42c0) (0xc0000ddae0) Stream removed, broadcasting: 1\nI0429 13:06:43.775733 131 log.go:172] (0xc0006d42c0) (0xc000508140) Stream removed, broadcasting: 3\nI0429 13:06:43.775745 131 log.go:172] (0xc0006d42c0) (0xc000508640) Stream removed, broadcasting: 5\n"
Apr 29 13:06:43.780: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:06:43.780: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 29 13:06:43.780: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:06:43.783: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Apr 29 13:06:53.790: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:06:53.790: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently
Running - Ready=false Apr 29 13:06:53.790: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 29 13:06:53.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999536s Apr 29 13:06:54.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996861763s Apr 29 13:06:55.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991617558s Apr 29 13:06:56.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984956079s Apr 29 13:06:57.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978963312s Apr 29 13:06:58.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97408002s Apr 29 13:06:59.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9684108s Apr 29 13:07:00.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962532733s Apr 29 13:07:01.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957615122s Apr 29 13:07:02.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 951.71609ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5439 Apr 29 13:07:03.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 13:07:04.109: INFO: stderr: "I0429 13:07:04.014645 152 log.go:172] (0xc000990000) (0xc000706aa0) Create stream\nI0429 13:07:04.014727 152 log.go:172] (0xc000990000) (0xc000706aa0) Stream added, broadcasting: 1\nI0429 13:07:04.016464 152 log.go:172] (0xc000990000) Reply frame received for 1\nI0429 13:07:04.016494 152 log.go:172] (0xc000990000) (0xc0009f4000) Create stream\nI0429 13:07:04.016502 152 log.go:172] (0xc000990000) (0xc0009f4000) Stream added, broadcasting: 3\nI0429 13:07:04.018059 152 log.go:172]
(0xc000990000) Reply frame received for 3\nI0429 13:07:04.018175 152 log.go:172] (0xc000990000) (0xc00073e640) Create stream\nI0429 13:07:04.018188 152 log.go:172] (0xc000990000) (0xc00073e640) Stream added, broadcasting: 5\nI0429 13:07:04.019252 152 log.go:172] (0xc000990000) Reply frame received for 5\nI0429 13:07:04.102822 152 log.go:172] (0xc000990000) Data frame received for 3\nI0429 13:07:04.102877 152 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0429 13:07:04.102904 152 log.go:172] (0xc0009f4000) (3) Data frame sent\nI0429 13:07:04.102935 152 log.go:172] (0xc000990000) Data frame received for 5\nI0429 13:07:04.102982 152 log.go:172] (0xc00073e640) (5) Data frame handling\nI0429 13:07:04.102997 152 log.go:172] (0xc00073e640) (5) Data frame sent\nI0429 13:07:04.103010 152 log.go:172] (0xc000990000) Data frame received for 5\nI0429 13:07:04.103020 152 log.go:172] (0xc00073e640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:07:04.103044 152 log.go:172] (0xc000990000) Data frame received for 3\nI0429 13:07:04.103061 152 log.go:172] (0xc0009f4000) (3) Data frame handling\nI0429 13:07:04.104505 152 log.go:172] (0xc000990000) Data frame received for 1\nI0429 13:07:04.104540 152 log.go:172] (0xc000706aa0) (1) Data frame handling\nI0429 13:07:04.104560 152 log.go:172] (0xc000706aa0) (1) Data frame sent\nI0429 13:07:04.104591 152 log.go:172] (0xc000990000) (0xc000706aa0) Stream removed, broadcasting: 1\nI0429 13:07:04.104609 152 log.go:172] (0xc000990000) Go away received\nI0429 13:07:04.104998 152 log.go:172] (0xc000990000) (0xc000706aa0) Stream removed, broadcasting: 1\nI0429 13:07:04.105017 152 log.go:172] (0xc000990000) (0xc0009f4000) Stream removed, broadcasting: 3\nI0429 13:07:04.105026 152 log.go:172] (0xc000990000) (0xc00073e640) Stream removed, broadcasting: 5\n" Apr 29 13:07:04.109: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 13:07:04.109: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 13:07:04.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 13:07:04.301: INFO: stderr: "I0429 13:07:04.235693 172 log.go:172] (0xc000a911e0) (0xc000b141e0) Create stream\nI0429 13:07:04.235756 172 log.go:172] (0xc000a911e0) (0xc000b141e0) Stream added, broadcasting: 1\nI0429 13:07:04.239929 172 log.go:172] (0xc000a911e0) Reply frame received for 1\nI0429 13:07:04.239979 172 log.go:172] (0xc000a911e0) (0xc000b46000) Create stream\nI0429 13:07:04.239989 172 log.go:172] (0xc000a911e0) (0xc000b46000) Stream added, broadcasting: 3\nI0429 13:07:04.240869 172 log.go:172] (0xc000a911e0) Reply frame received for 3\nI0429 13:07:04.240912 172 log.go:172] (0xc000a911e0) (0xc00063c500) Create stream\nI0429 13:07:04.240924 172 log.go:172] (0xc000a911e0) (0xc00063c500) Stream added, broadcasting: 5\nI0429 13:07:04.241846 172 log.go:172] (0xc000a911e0) Reply frame received for 5\nI0429 13:07:04.294723 172 log.go:172] (0xc000a911e0) Data frame received for 5\nI0429 13:07:04.294759 172 log.go:172] (0xc00063c500) (5) Data frame handling\nI0429 13:07:04.294768 172 log.go:172] (0xc00063c500) (5) Data frame sent\nI0429 13:07:04.294774 172 log.go:172] (0xc000a911e0) Data frame received for 5\nI0429 13:07:04.294778 172 log.go:172] (0xc00063c500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:07:04.294792 172 log.go:172] (0xc000a911e0) Data frame received for 3\nI0429 13:07:04.294797 172 log.go:172] (0xc000b46000) (3) Data frame handling\nI0429 13:07:04.294802 172 log.go:172] (0xc000b46000) (3) Data frame sent\nI0429 13:07:04.294806 172 log.go:172] (0xc000a911e0) Data frame received for 3\nI0429 13:07:04.294811 172 log.go:172] (0xc000b46000) (3) 
Data frame handling\nI0429 13:07:04.296374 172 log.go:172] (0xc000a911e0) Data frame received for 1\nI0429 13:07:04.296401 172 log.go:172] (0xc000b141e0) (1) Data frame handling\nI0429 13:07:04.296420 172 log.go:172] (0xc000b141e0) (1) Data frame sent\nI0429 13:07:04.296433 172 log.go:172] (0xc000a911e0) (0xc000b141e0) Stream removed, broadcasting: 1\nI0429 13:07:04.296447 172 log.go:172] (0xc000a911e0) Go away received\nI0429 13:07:04.296902 172 log.go:172] (0xc000a911e0) (0xc000b141e0) Stream removed, broadcasting: 1\nI0429 13:07:04.296935 172 log.go:172] (0xc000a911e0) (0xc000b46000) Stream removed, broadcasting: 3\nI0429 13:07:04.296949 172 log.go:172] (0xc000a911e0) (0xc00063c500) Stream removed, broadcasting: 5\n" Apr 29 13:07:04.301: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 13:07:04.301: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 13:07:04.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5439 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 29 13:07:04.514: INFO: stderr: "I0429 13:07:04.442950 193 log.go:172] (0xc000a3d340) (0xc0009da3c0) Create stream\nI0429 13:07:04.443025 193 log.go:172] (0xc000a3d340) (0xc0009da3c0) Stream added, broadcasting: 1\nI0429 13:07:04.450985 193 log.go:172] (0xc000a3d340) Reply frame received for 1\nI0429 13:07:04.451035 193 log.go:172] (0xc000a3d340) (0xc00055c1e0) Create stream\nI0429 13:07:04.451047 193 log.go:172] (0xc000a3d340) (0xc00055c1e0) Stream added, broadcasting: 3\nI0429 13:07:04.454595 193 log.go:172] (0xc000a3d340) Reply frame received for 3\nI0429 13:07:04.454634 193 log.go:172] (0xc000a3d340) (0xc00055d180) Create stream\nI0429 13:07:04.454648 193 log.go:172] (0xc000a3d340) (0xc00055d180) Stream added, broadcasting: 5\nI0429 13:07:04.455401 
193 log.go:172] (0xc000a3d340) Reply frame received for 5\nI0429 13:07:04.509000 193 log.go:172] (0xc000a3d340) Data frame received for 3\nI0429 13:07:04.509029 193 log.go:172] (0xc00055c1e0) (3) Data frame handling\nI0429 13:07:04.509040 193 log.go:172] (0xc00055c1e0) (3) Data frame sent\nI0429 13:07:04.509047 193 log.go:172] (0xc000a3d340) Data frame received for 3\nI0429 13:07:04.509053 193 log.go:172] (0xc00055c1e0) (3) Data frame handling\nI0429 13:07:04.509086 193 log.go:172] (0xc000a3d340) Data frame received for 5\nI0429 13:07:04.509095 193 log.go:172] (0xc00055d180) (5) Data frame handling\nI0429 13:07:04.509103 193 log.go:172] (0xc00055d180) (5) Data frame sent\nI0429 13:07:04.509254 193 log.go:172] (0xc000a3d340) Data frame received for 5\nI0429 13:07:04.509267 193 log.go:172] (0xc00055d180) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:07:04.510421 193 log.go:172] (0xc000a3d340) Data frame received for 1\nI0429 13:07:04.510457 193 log.go:172] (0xc0009da3c0) (1) Data frame handling\nI0429 13:07:04.510481 193 log.go:172] (0xc0009da3c0) (1) Data frame sent\nI0429 13:07:04.510510 193 log.go:172] (0xc000a3d340) (0xc0009da3c0) Stream removed, broadcasting: 1\nI0429 13:07:04.510538 193 log.go:172] (0xc000a3d340) Go away received\nI0429 13:07:04.510830 193 log.go:172] (0xc000a3d340) (0xc0009da3c0) Stream removed, broadcasting: 1\nI0429 13:07:04.510845 193 log.go:172] (0xc000a3d340) (0xc00055c1e0) Stream removed, broadcasting: 3\nI0429 13:07:04.510851 193 log.go:172] (0xc000a3d340) (0xc00055d180) Stream removed, broadcasting: 5\n" Apr 29 13:07:04.514: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 29 13:07:04.514: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 29 13:07:04.514: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] 
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Apr 29 13:07:34.577: INFO: Deleting all statefulset in ns statefulset-5439 Apr 29 13:07:34.580: INFO: Scaling statefulset ss to 0 Apr 29 13:07:34.589: INFO: Waiting for statefulset status.replicas updated to 0 Apr 29 13:07:34.591: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 29 13:07:34.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5439" for this suite. • [SLOW TEST:95.280 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":290,"completed":4,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 29 13:07:34.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 29 13:07:34.705: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Apr 29 13:07:34.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4249'
Apr 29 13:07:35.205: INFO: stderr: ""
Apr 29 13:07:35.205: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 13:07:36.210: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:07:36.210: INFO: Found 0 / 1
Apr 29 13:07:37.211: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:07:37.211: INFO: Found 0 / 1
Apr 29 13:07:38.210: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:07:38.210: INFO: Found 1 / 1
Apr 29 13:07:38.210: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Apr 29 13:07:38.213: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:07:38.213: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 29 13:07:38.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-d7gcb --namespace=kubectl-4249 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 29 13:07:38.324: INFO: stderr: ""
Apr 29 13:07:38.324: INFO: stdout: "pod/agnhost-master-d7gcb patched\n"
STEP: checking annotations
Apr 29 13:07:38.357: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:07:38.358: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:07:38.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4249" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":290,"completed":6,"skipped":126,"failed":0}
SSSSSSSSSSSSSS
------------------------------
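The kubectl patch test above applies the inline payload `{"metadata":{"annotations":{"x":"y"}}}` to a pod. As a hedged illustration of how such a patch folds into existing metadata (plain Python, no cluster or kubectl required; `merge_patch` and the pod dict below are illustrative stand-ins, not framework code — for plain map fields like annotations, a strategic merge patch behaves like this recursive merge):

```python
def merge_patch(obj, patch):
    """Recursively merge `patch` into `obj`, returning a new dict.

    Mirrors JSON-merge-patch semantics for map fields: nested dicts
    merge recursively, a null value deletes the key, anything else
    replaces the old value. Illustrative sketch only.
    """
    out = dict(obj)
    for key, val in patch.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], val)
        elif val is None:
            out.pop(key, None)  # null deletes the key
        else:
            out[key] = val
    return out

# A pod-shaped dict with one pre-existing annotation (hypothetical data).
pod = {"metadata": {"name": "agnhost-master-d7gcb", "annotations": {"a": "b"}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

After the merge, `patched` carries both the old annotation and the new `x: y`, while the original object is left untouched.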
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:07:38.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 29 13:07:38.470: INFO: Waiting up to 5m0s for pod "pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea" in namespace "emptydir-3569" to be "Succeeded or Failed"
Apr 29 13:07:38.474: INFO: Pod "pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106522ms
Apr 29 13:07:40.564: INFO: Pod "pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093925749s
Apr 29 13:07:42.567: INFO: Pod "pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097572787s
STEP: Saw pod success
Apr 29 13:07:42.567: INFO: Pod "pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea" satisfied condition "Succeeded or Failed"
Apr 29 13:07:42.570: INFO: Trying to get logs from node kali-worker pod pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea container test-container: 
STEP: delete the pod
Apr 29 13:07:42.617: INFO: Waiting for pod pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea to disappear
Apr 29 13:07:42.624: INFO: Pod pod-4ff329f2-c96b-4a72-aa35-517e9bb83aea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:07:42.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3569" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":7,"skipped":140,"failed":0}

------------------------------
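The EmptyDir test above repeatedly polls the pod phase ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'") until it hits a terminal phase or times out. A minimal sketch of that wait loop, assuming a `get_phase` callable as a stand-in for the real API read (this is an illustration of the pattern, not the e2e framework's actual implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            sleep=time.sleep):
    """Poll `get_phase()` until it returns a terminal pod phase.

    Returns the terminal phase ("Succeeded" or "Failed"), or raises
    TimeoutError once `timeout` seconds have elapsed. `sleep` is
    injectable so the loop can be tested without real delays.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Example: a pod observed as Pending twice, then Succeeded
# (mirroring the Pending/Pending/Succeeded sequence in the log above).
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases),
                                 timeout=5.0, interval=0.0,
                                 sleep=lambda _: None)
```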
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:07:42.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 29 13:07:49.744: INFO: 9 pods remaining
Apr 29 13:07:49.744: INFO: 0 pods has nil DeletionTimestamp
Apr 29 13:07:49.744: INFO: 
Apr 29 13:07:51.216: INFO: 0 pods remaining
Apr 29 13:07:51.216: INFO: 0 pods has nil DeletionTimestamp
Apr 29 13:07:51.216: INFO: 
STEP: Gathering metrics
W0429 13:07:52.550663       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 13:07:52.550: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:07:52.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1031" for this suite.

• [SLOW TEST:10.108 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":290,"completed":8,"skipped":140,"failed":0}
SSSSSS
------------------------------
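The garbage-collector test above waits for the RC's pods to drain, logging counts like "9 pods remaining" and "0 pods has nil DeletionTimestamp". A sketch of that bookkeeping over pod-like dicts (field name mirrors Kubernetes object metadata; the data is illustrative, not taken from the run):

```python
def deletion_progress(pods):
    """Return (pods remaining, pods with nil deletionTimestamp).

    A pod with a non-nil deletionTimestamp has been marked for
    deletion but not yet removed; one with a nil timestamp has not
    been marked at all. Illustrative sketch only.
    """
    remaining = len(pods)
    nil_ts = sum(1 for p in pods
                 if p["metadata"].get("deletionTimestamp") is None)
    return remaining, nil_ts

# Hypothetical snapshot: one pod marked for deletion, one not yet marked.
pods = [
    {"metadata": {"name": "rc-pod-0",
                  "deletionTimestamp": "2020-04-29T13:07:49Z"}},
    {"metadata": {"name": "rc-pod-1", "deletionTimestamp": None}},
]
```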
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:07:52.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Apr 29 13:07:58.676: INFO: Pod pod-hostip-03b3dfd0-9fd6-45bf-9bda-5f0e54ab1e0c has hostIP: 172.17.0.15
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:07:58.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3769" for this suite.

• [SLOW TEST:5.945 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":290,"completed":9,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:07:58.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-9142
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Apr 29 13:07:58.803: INFO: Found 0 stateful pods, waiting for 3
Apr 29 13:08:08.807: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:08:08.807: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:08:08.807: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 29 13:08:18.808: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:08:18.808: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:08:18.808: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:08:18.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9142 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:08:19.063: INFO: stderr: "I0429 13:08:18.943057     258 log.go:172] (0xc000b6ef20) (0xc000b20320) Create stream\nI0429 13:08:18.943109     258 log.go:172] (0xc000b6ef20) (0xc000b20320) Stream added, broadcasting: 1\nI0429 13:08:18.947337     258 log.go:172] (0xc000b6ef20) Reply frame received for 1\nI0429 13:08:18.947382     258 log.go:172] (0xc000b6ef20) (0xc00053a280) Create stream\nI0429 13:08:18.947405     258 log.go:172] (0xc000b6ef20) (0xc00053a280) Stream added, broadcasting: 3\nI0429 13:08:18.948375     258 log.go:172] (0xc000b6ef20) Reply frame received for 3\nI0429 13:08:18.948415     258 log.go:172] (0xc000b6ef20) (0xc000518dc0) Create stream\nI0429 13:08:18.948431     258 log.go:172] (0xc000b6ef20) (0xc000518dc0) Stream added, broadcasting: 5\nI0429 13:08:18.949476     258 log.go:172] (0xc000b6ef20) Reply frame received for 5\nI0429 13:08:19.025036     258 log.go:172] (0xc000b6ef20) Data frame received for 5\nI0429 13:08:19.025065     258 log.go:172] (0xc000518dc0) (5) Data frame handling\nI0429 13:08:19.025086     258 log.go:172] (0xc000518dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:08:19.055462     258 log.go:172] (0xc000b6ef20) Data frame received for 3\nI0429 13:08:19.055504     258 log.go:172] (0xc00053a280) (3) Data frame handling\nI0429 13:08:19.055526     258 log.go:172] (0xc00053a280) (3) Data frame sent\nI0429 13:08:19.055950     258 log.go:172] (0xc000b6ef20) Data frame received for 3\nI0429 13:08:19.055984     258 log.go:172] (0xc00053a280) (3) Data frame handling\nI0429 13:08:19.056107     258 log.go:172] (0xc000b6ef20) Data frame received for 5\nI0429 13:08:19.056134     258 log.go:172] (0xc000518dc0) (5) Data frame handling\nI0429 13:08:19.058237     258 log.go:172] (0xc000b6ef20) Data frame received for 1\nI0429 13:08:19.058262     258 log.go:172] (0xc000b20320) (1) Data frame handling\nI0429 13:08:19.058274     258 log.go:172] (0xc000b20320) (1) Data frame sent\nI0429 13:08:19.058289  
   258 log.go:172] (0xc000b6ef20) (0xc000b20320) Stream removed, broadcasting: 1\nI0429 13:08:19.058325     258 log.go:172] (0xc000b6ef20) Go away received\nI0429 13:08:19.058755     258 log.go:172] (0xc000b6ef20) (0xc000b20320) Stream removed, broadcasting: 1\nI0429 13:08:19.058776     258 log.go:172] (0xc000b6ef20) (0xc00053a280) Stream removed, broadcasting: 3\nI0429 13:08:19.058788     258 log.go:172] (0xc000b6ef20) (0xc000518dc0) Stream removed, broadcasting: 5\n"
Apr 29 13:08:19.063: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:08:19.063: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 29 13:08:29.099: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 29 13:08:39.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9142 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:08:39.404: INFO: stderr: "I0429 13:08:39.270862     278 log.go:172] (0xc000aab550) (0xc000523d60) Create stream\nI0429 13:08:39.270922     278 log.go:172] (0xc000aab550) (0xc000523d60) Stream added, broadcasting: 1\nI0429 13:08:39.274813     278 log.go:172] (0xc000aab550) Reply frame received for 1\nI0429 13:08:39.274870     278 log.go:172] (0xc000aab550) (0xc00014f720) Create stream\nI0429 13:08:39.274889     278 log.go:172] (0xc000aab550) (0xc00014f720) Stream added, broadcasting: 3\nI0429 13:08:39.275900     278 log.go:172] (0xc000aab550) Reply frame received for 3\nI0429 13:08:39.275952     278 log.go:172] (0xc000aab550) (0xc0006ac500) Create stream\nI0429 13:08:39.275970     278 log.go:172] (0xc000aab550) (0xc0006ac500) Stream added, broadcasting: 5\nI0429 13:08:39.276865     278 log.go:172] (0xc000aab550) Reply frame received for 5\nI0429 13:08:39.357949     278 log.go:172] (0xc000aab550) Data frame received for 5\nI0429 13:08:39.357973     278 log.go:172] (0xc0006ac500) (5) Data frame handling\nI0429 13:08:39.357986     278 log.go:172] (0xc0006ac500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:08:39.395862     278 log.go:172] (0xc000aab550) Data frame received for 3\nI0429 13:08:39.395889     278 log.go:172] (0xc00014f720) (3) Data frame handling\nI0429 13:08:39.395917     278 log.go:172] (0xc00014f720) (3) Data frame sent\nI0429 13:08:39.396378     278 log.go:172] (0xc000aab550) Data frame received for 3\nI0429 13:08:39.396419     278 log.go:172] (0xc000aab550) Data frame received for 5\nI0429 13:08:39.396493     278 log.go:172] (0xc0006ac500) (5) Data frame handling\nI0429 13:08:39.396541     278 log.go:172] (0xc00014f720) (3) Data frame handling\nI0429 13:08:39.398562     278 log.go:172] (0xc000aab550) Data frame received for 1\nI0429 13:08:39.398575     278 log.go:172] (0xc000523d60) (1) Data frame handling\nI0429 13:08:39.398584     278 log.go:172] (0xc000523d60) (1) Data frame sent\nI0429 13:08:39.398593     278 log.go:172] (0xc000aab550) (0xc000523d60) Stream removed, broadcasting: 1\nI0429 13:08:39.398605     278 log.go:172] (0xc000aab550) Go away received\nI0429 13:08:39.399139     278 log.go:172] (0xc000aab550) (0xc000523d60) Stream removed, broadcasting: 1\nI0429 13:08:39.399177     278 log.go:172] (0xc000aab550) (0xc00014f720) Stream removed, broadcasting: 3\nI0429 13:08:39.399196     278 log.go:172] (0xc000aab550) (0xc0006ac500) Stream removed, broadcasting: 5\n"
Apr 29 13:08:39.404: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:08:39.404: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Apr 29 13:08:49.460: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
Apr 29 13:08:49.460: INFO: Waiting for Pod statefulset-9142/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:08:49.460: INFO: Waiting for Pod statefulset-9142/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:08:49.460: INFO: Waiting for Pod statefulset-9142/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:08:59.468: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
Apr 29 13:08:59.468: INFO: Waiting for Pod statefulset-9142/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:08:59.468: INFO: Waiting for Pod statefulset-9142/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:09:09.468: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
STEP: Rolling back to a previous revision
Apr 29 13:09:19.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9142 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:09:19.748: INFO: stderr: "I0429 13:09:19.621450     299 log.go:172] (0xc000a73290) (0xc000c1a3c0) Create stream\nI0429 13:09:19.621525     299 log.go:172] (0xc000a73290) (0xc000c1a3c0) Stream added, broadcasting: 1\nI0429 13:09:19.628702     299 log.go:172] (0xc000a73290) Reply frame received for 1\nI0429 13:09:19.628762     299 log.go:172] (0xc000a73290) (0xc0006e2780) Create stream\nI0429 13:09:19.628783     299 log.go:172] (0xc000a73290) (0xc0006e2780) Stream added, broadcasting: 3\nI0429 13:09:19.630373     299 log.go:172] (0xc000a73290) Reply frame received for 3\nI0429 13:09:19.630413     299 log.go:172] (0xc000a73290) (0xc0006e30e0) Create stream\nI0429 13:09:19.630431     299 log.go:172] (0xc000a73290) (0xc0006e30e0) Stream added, broadcasting: 5\nI0429 13:09:19.631339     299 log.go:172] (0xc000a73290) Reply frame received for 5\nI0429 13:09:19.696920     299 log.go:172] (0xc000a73290) Data frame received for 5\nI0429 13:09:19.696952     299 log.go:172] (0xc0006e30e0) (5) Data frame handling\nI0429 13:09:19.696969     299 log.go:172] (0xc0006e30e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:09:19.741027     299 log.go:172] (0xc000a73290) Data frame received for 5\nI0429 13:09:19.741096     299 log.go:172] (0xc0006e30e0) (5) Data frame handling\nI0429 13:09:19.741312     299 log.go:172] (0xc000a73290) Data frame received for 3\nI0429 13:09:19.741335     299 log.go:172] (0xc0006e2780) (3) Data frame handling\nI0429 13:09:19.741351     299 log.go:172] (0xc0006e2780) (3) Data frame sent\nI0429 13:09:19.741455     299 log.go:172] (0xc000a73290) Data frame received for 3\nI0429 13:09:19.741487     299 log.go:172] (0xc0006e2780) (3) Data frame handling\nI0429 13:09:19.742905     299 log.go:172] (0xc000a73290) Data frame received for 1\nI0429 13:09:19.742920     299 log.go:172] (0xc000c1a3c0) (1) Data frame handling\nI0429 13:09:19.742926     299 log.go:172] (0xc000c1a3c0) (1) Data frame sent\nI0429 13:09:19.742934     299 log.go:172] (0xc000a73290) (0xc000c1a3c0) Stream removed, broadcasting: 1\nI0429 13:09:19.742993     299 log.go:172] (0xc000a73290) Go away received\nI0429 13:09:19.743158     299 log.go:172] (0xc000a73290) (0xc000c1a3c0) Stream removed, broadcasting: 1\nI0429 13:09:19.743171     299 log.go:172] (0xc000a73290) (0xc0006e2780) Stream removed, broadcasting: 3\nI0429 13:09:19.743178     299 log.go:172] (0xc000a73290) (0xc0006e30e0) Stream removed, broadcasting: 5\n"
Apr 29 13:09:19.748: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:09:19.748: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Apr 29 13:09:29.790: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 29 13:09:39.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9142 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:09:40.096: INFO: stderr: "I0429 13:09:40.010397     319 log.go:172] (0xc000840370) (0xc0003de3c0) Create stream\nI0429 13:09:40.010461     319 log.go:172] (0xc000840370) (0xc0003de3c0) Stream added, broadcasting: 1\nI0429 13:09:40.012294     319 log.go:172] (0xc000840370) Reply frame received for 1\nI0429 13:09:40.012339     319 log.go:172] (0xc000840370) (0xc00023c780) Create stream\nI0429 13:09:40.012354     319 log.go:172] (0xc000840370) (0xc00023c780) Stream added, broadcasting: 3\nI0429 13:09:40.013550     319 log.go:172] (0xc000840370) Reply frame received for 3\nI0429 13:09:40.013589     319 log.go:172] (0xc000840370) (0xc00023d2c0) Create stream\nI0429 13:09:40.013603     319 log.go:172] (0xc000840370) (0xc00023d2c0) Stream added, broadcasting: 5\nI0429 13:09:40.014642     319 log.go:172] (0xc000840370) Reply frame received for 5\nI0429 13:09:40.090768     319 log.go:172] (0xc000840370) Data frame received for 3\nI0429 13:09:40.090809     319 log.go:172] (0xc00023c780) (3) Data frame handling\nI0429 13:09:40.090836     319 log.go:172] (0xc00023c780) (3) Data frame sent\nI0429 13:09:40.090888     319 log.go:172] (0xc000840370) Data frame received for 3\nI0429 13:09:40.090924     319 log.go:172] (0xc00023c780) (3) Data frame handling\nI0429 13:09:40.090967     319 log.go:172] (0xc000840370) Data frame received for 5\nI0429 13:09:40.090988     319 log.go:172] (0xc00023d2c0) (5) Data frame handling\nI0429 13:09:40.091006     319 log.go:172] (0xc00023d2c0) (5) Data frame sent\nI0429 13:09:40.091019     319 log.go:172] (0xc000840370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:09:40.091027     319 log.go:172] (0xc00023d2c0) (5) Data frame handling\nI0429 13:09:40.092501     319 log.go:172] (0xc000840370) Data frame received for 1\nI0429 13:09:40.092515     319 log.go:172] (0xc0003de3c0) (1) Data frame handling\nI0429 13:09:40.092526     319 log.go:172] (0xc0003de3c0) (1) Data frame sent\nI0429 13:09:40.092537     319 log.go:172] (0xc000840370) (0xc0003de3c0) Stream removed, broadcasting: 1\nI0429 13:09:40.092675     319 log.go:172] (0xc000840370) Go away received\nI0429 13:09:40.092810     319 log.go:172] (0xc000840370) (0xc0003de3c0) Stream removed, broadcasting: 1\nI0429 13:09:40.092821     319 log.go:172] (0xc000840370) (0xc00023c780) Stream removed, broadcasting: 3\nI0429 13:09:40.092828     319 log.go:172] (0xc000840370) (0xc00023d2c0) Stream removed, broadcasting: 5\n"
Apr 29 13:09:40.096: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:09:40.096: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Apr 29 13:09:50.118: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
Apr 29 13:09:50.118: INFO: Waiting for Pod statefulset-9142/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 29 13:09:50.118: INFO: Waiting for Pod statefulset-9142/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 29 13:09:50.118: INFO: Waiting for Pod statefulset-9142/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 29 13:10:00.182: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
Apr 29 13:10:00.182: INFO: Waiting for Pod statefulset-9142/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Apr 29 13:10:10.127: INFO: Waiting for StatefulSet statefulset-9142/ss2 to complete update
Apr 29 13:10:10.127: INFO: Waiting for Pod statefulset-9142/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Apr 29 13:10:20.125: INFO: Deleting all statefulset in ns statefulset-9142
Apr 29 13:10:20.130: INFO: Scaling statefulset ss2 to 0
Apr 29 13:10:40.149: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:10:40.152: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:10:40.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9142" for this suite.

• [SLOW TEST:161.608 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":290,"completed":10,"skipped":193,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:10:40.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service nodeport-test with type=NodePort in namespace services-8757
STEP: creating replication controller nodeport-test in namespace services-8757
I0429 13:10:40.880796       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8757, replica count: 2
I0429 13:10:43.931271       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:10:46.931577       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:10:46.931: INFO: Creating new exec pod
Apr 29 13:10:51.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8757 execpodm7b5v -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Apr 29 13:10:52.204: INFO: stderr: "I0429 13:10:52.124706     340 log.go:172] (0xc000b74fd0) (0xc000b7a460) Create stream\nI0429 13:10:52.124754     340 log.go:172] (0xc000b74fd0) (0xc000b7a460) Stream added, broadcasting: 1\nI0429 13:10:52.129360     340 log.go:172] (0xc000b74fd0) Reply frame received for 1\nI0429 13:10:52.129429     340 log.go:172] (0xc000b74fd0) (0xc00015fae0) Create stream\nI0429 13:10:52.129447     340 log.go:172] (0xc000b74fd0) (0xc00015fae0) Stream added, broadcasting: 3\nI0429 13:10:52.130935     340 log.go:172] (0xc000b74fd0) Reply frame received for 3\nI0429 13:10:52.130984     340 log.go:172] (0xc000b74fd0) (0xc0006ba640) Create stream\nI0429 13:10:52.131006     340 log.go:172] (0xc000b74fd0) (0xc0006ba640) Stream added, broadcasting: 5\nI0429 13:10:52.132001     340 log.go:172] (0xc000b74fd0) Reply frame received for 5\nI0429 13:10:52.198250     340 log.go:172] (0xc000b74fd0) Data frame received for 5\nI0429 13:10:52.198293     340 log.go:172] (0xc0006ba640) (5) Data frame handling\nI0429 13:10:52.198323     340 log.go:172] (0xc0006ba640) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0429 13:10:52.198675     340 log.go:172] (0xc000b74fd0) Data frame received for 5\nI0429 13:10:52.198698     340 log.go:172] (0xc0006ba640) (5) Data frame handling\nI0429 13:10:52.198725     340 log.go:172] (0xc0006ba640) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0429 13:10:52.198758     340 log.go:172] (0xc000b74fd0) Data frame received for 3\nI0429 13:10:52.198779     340 log.go:172] (0xc00015fae0) (3) Data frame handling\nI0429 13:10:52.198922     340 log.go:172] (0xc000b74fd0) Data frame received for 5\nI0429 13:10:52.198938     340 log.go:172] (0xc0006ba640) (5) Data frame handling\nI0429 13:10:52.200326     340 log.go:172] (0xc000b74fd0) Data frame received for 1\nI0429 13:10:52.200339     340 log.go:172] (0xc000b7a460) (1) Data frame handling\nI0429 13:10:52.200346     340 log.go:172] (0xc000b7a460) (1) Data frame sent\nI0429 13:10:52.200354     340 log.go:172] (0xc000b74fd0) (0xc000b7a460) Stream removed, broadcasting: 1\nI0429 13:10:52.200402     340 log.go:172] (0xc000b74fd0) Go away received\nI0429 13:10:52.200584     340 log.go:172] (0xc000b74fd0) (0xc000b7a460) Stream removed, broadcasting: 1\nI0429 13:10:52.200598     340 log.go:172] (0xc000b74fd0) (0xc00015fae0) Stream removed, broadcasting: 3\nI0429 13:10:52.200605     340 log.go:172] (0xc000b74fd0) (0xc0006ba640) Stream removed, broadcasting: 5\n"
Apr 29 13:10:52.204: INFO: stdout: ""
Apr 29 13:10:52.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8757 execpodm7b5v -- /bin/sh -x -c nc -zv -t -w 2 10.96.64.250 80'
Apr 29 13:10:52.401: INFO: stderr: "I0429 13:10:52.335611     362 log.go:172] (0xc00003bb80) (0xc000a9e5a0) Create stream\nI0429 13:10:52.335680     362 log.go:172] (0xc00003bb80) (0xc000a9e5a0) Stream added, broadcasting: 1\nI0429 13:10:52.343605     362 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0429 13:10:52.343644     362 log.go:172] (0xc00003bb80) (0xc000854a00) Create stream\nI0429 13:10:52.343666     362 log.go:172] (0xc00003bb80) (0xc000854a00) Stream added, broadcasting: 3\nI0429 13:10:52.344543     362 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0429 13:10:52.344569     362 log.go:172] (0xc00003bb80) (0xc00056ad20) Create stream\nI0429 13:10:52.344577     362 log.go:172] (0xc00003bb80) (0xc00056ad20) Stream added, broadcasting: 5\nI0429 13:10:52.345486     362 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0429 13:10:52.393810     362 log.go:172] (0xc00003bb80) Data frame received for 5\nI0429 13:10:52.393842     362 log.go:172] (0xc00056ad20) (5) Data frame handling\nI0429 13:10:52.393862     362 log.go:172] (0xc00056ad20) (5) Data frame sent\nI0429 13:10:52.393871     362 log.go:172] (0xc00003bb80) Data frame received for 5\nI0429 13:10:52.393877     362 log.go:172] (0xc00056ad20) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.64.250 80\nConnection to 10.96.64.250 80 port [tcp/http] succeeded!\nI0429 13:10:52.393936     362 log.go:172] (0xc00056ad20) (5) Data frame sent\nI0429 13:10:52.394114     362 log.go:172] (0xc00003bb80) Data frame received for 5\nI0429 13:10:52.394135     362 log.go:172] (0xc00056ad20) (5) Data frame handling\nI0429 13:10:52.394355     362 log.go:172] (0xc00003bb80) Data frame received for 3\nI0429 13:10:52.394372     362 log.go:172] (0xc000854a00) (3) Data frame handling\nI0429 13:10:52.395738     362 log.go:172] (0xc00003bb80) Data frame received for 1\nI0429 13:10:52.395755     362 log.go:172] (0xc000a9e5a0) (1) Data frame handling\nI0429 13:10:52.395764     362 log.go:172] (0xc000a9e5a0) (1) Data frame sent\nI0429 13:10:52.395912     362 log.go:172] (0xc00003bb80) (0xc000a9e5a0) Stream removed, broadcasting: 1\nI0429 13:10:52.396220     362 log.go:172] (0xc00003bb80) (0xc000a9e5a0) Stream removed, broadcasting: 1\nI0429 13:10:52.396243     362 log.go:172] (0xc00003bb80) (0xc000854a00) Stream removed, broadcasting: 3\nI0429 13:10:52.396253     362 log.go:172] (0xc00003bb80) (0xc00056ad20) Stream removed, broadcasting: 5\n"
Apr 29 13:10:52.401: INFO: stdout: ""
Apr 29 13:10:52.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8757 execpodm7b5v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31859'
Apr 29 13:10:52.595: INFO: stderr: "I0429 13:10:52.528521     384 log.go:172] (0xc00094a210) (0xc000502dc0) Create stream\nI0429 13:10:52.528573     384 log.go:172] (0xc00094a210) (0xc000502dc0) Stream added, broadcasting: 1\nI0429 13:10:52.534653     384 log.go:172] (0xc00094a210) Reply frame received for 1\nI0429 13:10:52.534723     384 log.go:172] (0xc00094a210) (0xc000688000) Create stream\nI0429 13:10:52.534739     384 log.go:172] (0xc00094a210) (0xc000688000) Stream added, broadcasting: 3\nI0429 13:10:52.535959     384 log.go:172] (0xc00094a210) Reply frame received for 3\nI0429 13:10:52.536013     384 log.go:172] (0xc00094a210) (0xc000688780) Create stream\nI0429 13:10:52.536033     384 log.go:172] (0xc00094a210) (0xc000688780) Stream added, broadcasting: 5\nI0429 13:10:52.536928     384 log.go:172] (0xc00094a210) Reply frame received for 5\nI0429 13:10:52.588665     384 log.go:172] (0xc00094a210) Data frame received for 3\nI0429 13:10:52.588697     384 log.go:172] (0xc000688000) (3) Data frame handling\nI0429 13:10:52.588715     384 log.go:172] (0xc00094a210) Data frame received for 5\nI0429 13:10:52.588724     384 log.go:172] (0xc000688780) (5) Data frame handling\nI0429 13:10:52.588734     384 log.go:172] (0xc000688780) (5) Data frame sent\nI0429 13:10:52.588745     384 log.go:172] (0xc00094a210) Data frame received for 5\nI0429 13:10:52.588754     384 log.go:172] (0xc000688780) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31859\nConnection to 172.17.0.15 31859 port [tcp/31859] succeeded!\nI0429 13:10:52.590416     384 log.go:172] (0xc00094a210) Data frame received for 1\nI0429 13:10:52.590431     384 log.go:172] (0xc000502dc0) (1) Data frame handling\nI0429 13:10:52.590437     384 log.go:172] (0xc000502dc0) (1) Data frame sent\nI0429 13:10:52.590446     384 log.go:172] (0xc00094a210) (0xc000502dc0) Stream removed, broadcasting: 1\nI0429 13:10:52.590457     384 log.go:172] (0xc00094a210) Go away received\nI0429 13:10:52.590943     384 log.go:172] (0xc00094a210) (0xc000502dc0) Stream removed, broadcasting: 1\nI0429 13:10:52.590969     384 log.go:172] (0xc00094a210) (0xc000688000) Stream removed, broadcasting: 3\nI0429 13:10:52.590982     384 log.go:172] (0xc00094a210) (0xc000688780) Stream removed, broadcasting: 5\n"
Apr 29 13:10:52.595: INFO: stdout: ""
Apr 29 13:10:52.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8757 execpodm7b5v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31859'
Apr 29 13:10:52.817: INFO: stderr: "I0429 13:10:52.738394     406 log.go:172] (0xc000b26f20) (0xc000b48500) Create stream\nI0429 13:10:52.738458     406 log.go:172] (0xc000b26f20) (0xc000b48500) Stream added, broadcasting: 1\nI0429 13:10:52.742982     406 log.go:172] (0xc000b26f20) Reply frame received for 1\nI0429 13:10:52.743020     406 log.go:172] (0xc000b26f20) (0xc000710e60) Create stream\nI0429 13:10:52.743031     406 log.go:172] (0xc000b26f20) (0xc000710e60) Stream added, broadcasting: 3\nI0429 13:10:52.743880     406 log.go:172] (0xc000b26f20) Reply frame received for 3\nI0429 13:10:52.743925     406 log.go:172] (0xc000b26f20) (0xc0005901e0) Create stream\nI0429 13:10:52.743934     406 log.go:172] (0xc000b26f20) (0xc0005901e0) Stream added, broadcasting: 5\nI0429 13:10:52.744609     406 log.go:172] (0xc000b26f20) Reply frame received for 5\nI0429 13:10:52.812161     406 log.go:172] (0xc000b26f20) Data frame received for 5\nI0429 13:10:52.812203     406 log.go:172] (0xc0005901e0) (5) Data frame handling\nI0429 13:10:52.812218     406 log.go:172] (0xc0005901e0) (5) Data frame sent\nI0429 13:10:52.812228     406 log.go:172] (0xc000b26f20) Data frame received for 5\nI0429 13:10:52.812235     406 log.go:172] (0xc0005901e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31859\nConnection to 172.17.0.18 31859 port [tcp/31859] succeeded!\nI0429 13:10:52.812257     406 log.go:172] (0xc000b26f20) Data frame received for 3\nI0429 13:10:52.812265     406 log.go:172] (0xc000710e60) (3) Data frame handling\nI0429 13:10:52.813583     406 log.go:172] (0xc000b26f20) Data frame received for 1\nI0429 13:10:52.813599     406 log.go:172] (0xc000b48500) (1) Data frame handling\nI0429 13:10:52.813607     406 log.go:172] (0xc000b48500) (1) Data frame sent\nI0429 13:10:52.813616     406 log.go:172] (0xc000b26f20) (0xc000b48500) Stream removed, broadcasting: 1\nI0429 13:10:52.813634     406 log.go:172] (0xc000b26f20) Go away received\nI0429 13:10:52.813898     406 log.go:172] (0xc000b26f20) (0xc000b48500) Stream removed, broadcasting: 1\nI0429 13:10:52.813912     406 log.go:172] (0xc000b26f20) (0xc000710e60) Stream removed, broadcasting: 3\nI0429 13:10:52.813919     406 log.go:172] (0xc000b26f20) (0xc0005901e0) Stream removed, broadcasting: 5\n"
Apr 29 13:10:52.817: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:10:52.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8757" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:12.533 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":290,"completed":11,"skipped":196,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:10:52.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 13:10:59.090: INFO: DNS probes using dns-test-246e0482-361f-4854-ad00-8102e8f41b36 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 13:11:05.279: INFO: File wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:05.283: INFO: File jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:05.283: INFO: Lookups using dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f failed for: [wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local]

Apr 29 13:11:10.293: INFO: File jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:10.293: INFO: Lookups using dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f failed for: [jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local]

Apr 29 13:11:15.289: INFO: File wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:15.293: INFO: File jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:15.293: INFO: Lookups using dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f failed for: [wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local]

Apr 29 13:11:20.336: INFO: File jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:20.336: INFO: Lookups using dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f failed for: [jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local]

Apr 29 13:11:25.288: INFO: File wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local from pod  dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 13:11:25.291: INFO: Lookups using dns-4888/dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f failed for: [wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local]

Apr 29 13:11:30.292: INFO: DNS probes using dns-test-1e8fd4f4-09b8-4567-8571-1516a996f53f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4888.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4888.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 13:11:36.710: INFO: DNS probes using dns-test-7e52614b-ef47-4a87-a0fa-9d523809b591 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:11:36.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4888" for this suite.

• [SLOW TEST:43.974 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":290,"completed":12,"skipped":227,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:11:36.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-2839
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 13:11:37.352: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 13:11:37.482: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:11:39.486: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:11:41.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:11:43.486: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:45.485: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:47.485: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:49.486: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:51.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:53.486: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:55.486: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:11:57.486: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 13:11:57.491: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 13:11:59.496: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 13:12:01.511: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 13:12:05.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.25:8080/dial?request=hostname&protocol=http&host=10.244.2.24&port=8080&tries=1'] Namespace:pod-network-test-2839 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:12:05.554: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:12:05.589746       7 log.go:172] (0xc0019b80b0) (0xc000649360) Create stream
I0429 13:12:05.589789       7 log.go:172] (0xc0019b80b0) (0xc000649360) Stream added, broadcasting: 1
I0429 13:12:05.591884       7 log.go:172] (0xc0019b80b0) Reply frame received for 1
I0429 13:12:05.591936       7 log.go:172] (0xc0019b80b0) (0xc000fbc820) Create stream
I0429 13:12:05.591957       7 log.go:172] (0xc0019b80b0) (0xc000fbc820) Stream added, broadcasting: 3
I0429 13:12:05.592741       7 log.go:172] (0xc0019b80b0) Reply frame received for 3
I0429 13:12:05.592803       7 log.go:172] (0xc0019b80b0) (0xc000fbcbe0) Create stream
I0429 13:12:05.592828       7 log.go:172] (0xc0019b80b0) (0xc000fbcbe0) Stream added, broadcasting: 5
I0429 13:12:05.593908       7 log.go:172] (0xc0019b80b0) Reply frame received for 5
I0429 13:12:05.681041       7 log.go:172] (0xc0019b80b0) Data frame received for 3
I0429 13:12:05.681078       7 log.go:172] (0xc000fbc820) (3) Data frame handling
I0429 13:12:05.681101       7 log.go:172] (0xc000fbc820) (3) Data frame sent
I0429 13:12:05.681748       7 log.go:172] (0xc0019b80b0) Data frame received for 3
I0429 13:12:05.681770       7 log.go:172] (0xc000fbc820) (3) Data frame handling
I0429 13:12:05.681808       7 log.go:172] (0xc0019b80b0) Data frame received for 5
I0429 13:12:05.681828       7 log.go:172] (0xc000fbcbe0) (5) Data frame handling
I0429 13:12:05.683483       7 log.go:172] (0xc0019b80b0) Data frame received for 1
I0429 13:12:05.683511       7 log.go:172] (0xc000649360) (1) Data frame handling
I0429 13:12:05.683531       7 log.go:172] (0xc000649360) (1) Data frame sent
I0429 13:12:05.683551       7 log.go:172] (0xc0019b80b0) (0xc000649360) Stream removed, broadcasting: 1
I0429 13:12:05.683567       7 log.go:172] (0xc0019b80b0) Go away received
I0429 13:12:05.683654       7 log.go:172] (0xc0019b80b0) (0xc000649360) Stream removed, broadcasting: 1
I0429 13:12:05.683671       7 log.go:172] (0xc0019b80b0) (0xc000fbc820) Stream removed, broadcasting: 3
I0429 13:12:05.683683       7 log.go:172] (0xc0019b80b0) (0xc000fbcbe0) Stream removed, broadcasting: 5
Apr 29 13:12:05.683: INFO: Waiting for responses: map[]
Apr 29 13:12:05.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.25:8080/dial?request=hostname&protocol=http&host=10.244.1.27&port=8080&tries=1'] Namespace:pod-network-test-2839 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:12:05.686: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:12:05.718576       7 log.go:172] (0xc0019b8840) (0xc000d90140) Create stream
I0429 13:12:05.718606       7 log.go:172] (0xc0019b8840) (0xc000d90140) Stream added, broadcasting: 1
I0429 13:12:05.720579       7 log.go:172] (0xc0019b8840) Reply frame received for 1
I0429 13:12:05.720613       7 log.go:172] (0xc0019b8840) (0xc000bb4000) Create stream
I0429 13:12:05.720624       7 log.go:172] (0xc0019b8840) (0xc000bb4000) Stream added, broadcasting: 3
I0429 13:12:05.721677       7 log.go:172] (0xc0019b8840) Reply frame received for 3
I0429 13:12:05.721723       7 log.go:172] (0xc0019b8840) (0xc000d90280) Create stream
I0429 13:12:05.721733       7 log.go:172] (0xc0019b8840) (0xc000d90280) Stream added, broadcasting: 5
I0429 13:12:05.722540       7 log.go:172] (0xc0019b8840) Reply frame received for 5
I0429 13:12:05.789481       7 log.go:172] (0xc0019b8840) Data frame received for 3
I0429 13:12:05.789530       7 log.go:172] (0xc000bb4000) (3) Data frame handling
I0429 13:12:05.789563       7 log.go:172] (0xc000bb4000) (3) Data frame sent
I0429 13:12:05.790097       7 log.go:172] (0xc0019b8840) Data frame received for 5
I0429 13:12:05.790120       7 log.go:172] (0xc000d90280) (5) Data frame handling
I0429 13:12:05.790156       7 log.go:172] (0xc0019b8840) Data frame received for 3
I0429 13:12:05.790189       7 log.go:172] (0xc000bb4000) (3) Data frame handling
I0429 13:12:05.791640       7 log.go:172] (0xc0019b8840) Data frame received for 1
I0429 13:12:05.791659       7 log.go:172] (0xc000d90140) (1) Data frame handling
I0429 13:12:05.791685       7 log.go:172] (0xc000d90140) (1) Data frame sent
I0429 13:12:05.791703       7 log.go:172] (0xc0019b8840) (0xc000d90140) Stream removed, broadcasting: 1
I0429 13:12:05.791738       7 log.go:172] (0xc0019b8840) Go away received
I0429 13:12:05.791807       7 log.go:172] (0xc0019b8840) (0xc000d90140) Stream removed, broadcasting: 1
I0429 13:12:05.791818       7 log.go:172] (0xc0019b8840) (0xc000bb4000) Stream removed, broadcasting: 3
I0429 13:12:05.791849       7 log.go:172] (0xc0019b8840) (0xc000d90280) Stream removed, broadcasting: 5
Apr 29 13:12:05.791: INFO: Waiting for responses: map[]
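The two `ExecWithOptions` calls above exercise the test container's `/dial` endpoint: the test pod (10.244.2.25) is asked to fan a `hostname` request out to each netserver endpoint and report which ones answered. A sketch of how those probe URLs are assembled, using the pod IPs and port from the log (only the URL shape is shown; no cluster is reachable here):

```shell
# Build the /dial probe URL that the test pod is exec'd with.
test_pod_ip=10.244.2.25
port=8080
dial_url() {   # $1 = target netserver endpoint IP
  echo "http://${test_pod_ip}:${port}/dial?request=hostname&protocol=http&host=${1}&port=${port}&tries=1"
}

dial_url 10.244.2.24   # netserver on the same node as the test pod
dial_url 10.244.1.27   # netserver on the other node
# In the log these URLs are fetched with: curl -g -q -s '<url>'
```

An empty `Waiting for responses: map[]` means every expected hostname was accounted for and no responses are still outstanding.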
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:05.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2839" for this suite.

• [SLOW TEST:29.001 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":290,"completed":13,"skipped":231,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:05.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:05.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-150" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":290,"completed":14,"skipped":243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:05.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Apr 29 13:12:05.970: INFO: Waiting up to 5m0s for pod "var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab" in namespace "var-expansion-8704" to be "Succeeded or Failed"
Apr 29 13:12:05.973: INFO: Pod "var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599391ms
Apr 29 13:12:07.977: INFO: Pod "var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007542282s
Apr 29 13:12:09.982: INFO: Pod "var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011952444s
STEP: Saw pod success
Apr 29 13:12:09.982: INFO: Pod "var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab" satisfied condition "Succeeded or Failed"
Apr 29 13:12:09.985: INFO: Trying to get logs from node kali-worker2 pod var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab container dapi-container: 
STEP: delete the pod
Apr 29 13:12:10.032: INFO: Waiting for pod var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab to disappear
Apr 29 13:12:10.039: INFO: Pod var-expansion-2848acd1-3a5d-4d27-b73c-7baa4d8828ab no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:10.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8704" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":290,"completed":15,"skipped":270,"failed":0}
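The substitution exercised by this test is the `subPathExpr` field, which expands `$(VAR)` references from the container's environment into the volume mount's subpath. A hedged sketch of the kind of pod spec involved (the field names are the real Kubernetes API; the pod, volume, and image names are illustrative, not the exact manifest the framework generates):

```shell
# Write an illustrative manifest showing volume-subpath substitution.
cat > /tmp/var-expansion-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29             # illustrative image
    command: ["sh", "-c", "test -d /volume_mount && echo ok"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)      # expanded from the env var at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF
grep -n 'subPathExpr' /tmp/var-expansion-sketch.yaml
```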
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:10.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:12:10.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e" in namespace "downward-api-2925" to be "Succeeded or Failed"
Apr 29 13:12:10.147: INFO: Pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.870371ms
Apr 29 13:12:12.260: INFO: Pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122876348s
Apr 29 13:12:14.264: INFO: Pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.126938479s
Apr 29 13:12:16.269: INFO: Pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131715856s
STEP: Saw pod success
Apr 29 13:12:16.269: INFO: Pod "downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e" satisfied condition "Succeeded or Failed"
Apr 29 13:12:16.272: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e container client-container: 
STEP: delete the pod
Apr 29 13:12:16.320: INFO: Waiting for pod downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e to disappear
Apr 29 13:12:16.327: INFO: Pod downwardapi-volume-6a6e92a7-435e-4f51-ae69-94e3ed5c1b5e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:16.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2925" for this suite.

• [SLOW TEST:6.311 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":290,"completed":16,"skipped":272,"failed":0}
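The downward API volume test above mounts a volume whose file contents are populated from the container's own resource limits via `resourceFieldRef`. A sketch of the relevant stanza (names, image, and the limit value are illustrative; `resourceFieldRef` with `limits.cpu` is the mechanism being checked):

```shell
# Illustrative manifest: expose the container's CPU limit as a file.
cat > /tmp/downward-cpu-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29             # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 1250m                  # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu      # written into /etc/podinfo/cpu_limit
EOF
grep -n 'resourceFieldRef' /tmp/downward-cpu-sketch.yaml
```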
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:16.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:12:16.396: INFO: Creating ReplicaSet my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536
Apr 29 13:12:16.447: INFO: Pod name my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536: Found 0 pods out of 1
Apr 29 13:12:21.470: INFO: Pod name my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536: Found 1 pods out of 1
Apr 29 13:12:21.470: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536" is running
Apr 29 13:12:21.473: INFO: Pod "my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536-kw9ll" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:12:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:12:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:12:19 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:12:16 +0000 UTC Reason: Message:}])
Apr 29 13:12:21.474: INFO: Trying to dial the pod
Apr 29 13:12:26.484: INFO: Controller my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536: Got expected result from replica 1 [my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536-kw9ll]: "my-hostname-basic-6a68e37b-cf3e-4e54-b74a-ed19904de536-kw9ll", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:26.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1740" for this suite.

• [SLOW TEST:10.133 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":290,"completed":17,"skipped":276,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:26.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-7pw4
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 13:12:27.566: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7pw4" in namespace "subpath-3019" to be "Succeeded or Failed"
Apr 29 13:12:27.600: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.892344ms
Apr 29 13:12:29.604: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037734593s
Apr 29 13:12:31.609: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 4.042979064s
Apr 29 13:12:33.620: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 6.053430475s
Apr 29 13:12:35.624: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 8.057859183s
Apr 29 13:12:37.628: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 10.062293166s
Apr 29 13:12:39.633: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 12.066456166s
Apr 29 13:12:41.636: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 14.070102054s
Apr 29 13:12:43.661: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 16.095332176s
Apr 29 13:12:45.698: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 18.131680727s
Apr 29 13:12:47.702: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 20.135687231s
Apr 29 13:12:49.729: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 22.16319514s
Apr 29 13:12:51.744: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Running", Reason="", readiness=true. Elapsed: 24.178333957s
Apr 29 13:12:53.749: INFO: Pod "pod-subpath-test-downwardapi-7pw4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.18242198s
STEP: Saw pod success
Apr 29 13:12:53.749: INFO: Pod "pod-subpath-test-downwardapi-7pw4" satisfied condition "Succeeded or Failed"
Apr 29 13:12:53.752: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-7pw4 container test-container-subpath-downwardapi-7pw4: 
STEP: delete the pod
Apr 29 13:12:53.854: INFO: Waiting for pod pod-subpath-test-downwardapi-7pw4 to disappear
Apr 29 13:12:53.857: INFO: Pod pod-subpath-test-downwardapi-7pw4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7pw4
Apr 29 13:12:53.857: INFO: Deleting pod "pod-subpath-test-downwardapi-7pw4" in namespace "subpath-3019"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:12:53.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3019" for this suite.

• [SLOW TEST:27.376 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":290,"completed":18,"skipped":283,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:12:53.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 13:12:53.921: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 13:12:53.940: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 13:12:53.943: INFO: 
Logging pods the apiserver thinks are on node kali-worker before test
Apr 29 13:12:53.947: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:12:53.947: INFO: 	Container kindnet-cni ready: true, restart count 1
Apr 29 13:12:53.947: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:12:53.947: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 13:12:53.947: INFO: 
Logging pods the apiserver thinks are on node kali-worker2 before test
Apr 29 13:12:53.951: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:12:53.951: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 13:12:53.951: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:12:53.951: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-64dc00fa-239e-46f6-8a46-5c694ca0f0e9 95
STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect scheduled
STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-64dc00fa-239e-46f6-8a46-5c694ca0f0e9 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-64dc00fa-239e-46f6-8a46-5c694ca0f0e9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:18:02.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-729" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:308.377 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":290,"completed":19,"skipped":318,"failed":0}
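The predicate validated above is that a pod holding hostPort 54322 on hostIP 0.0.0.0 (all addresses) conflicts with a later pod requesting the same port on 127.0.0.1 on the same node, so the second pod must remain unscheduled. A sketch of the two conflicting specs, using the port and node label from the log (pod and image names are illustrative):

```shell
# Illustrative manifests for the hostPort-conflict pair (pod4 / pod5).
cat > /tmp/hostport-conflict-sketch.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  containers:
  - name: server
    image: busybox:1.29             # illustrative image
    ports:
    - containerPort: 8080
      hostPort: 54322               # hostIP omitted => 0.0.0.0, all addresses
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:                     # pin to the node pod4 landed on
    kubernetes.io/e2e-64dc00fa-239e-46f6-8a46-5c694ca0f0e9: "95"
  containers:
  - name: server
    image: busybox:1.29
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1             # overlaps 0.0.0.0:54322 => stays Pending
EOF
grep -c 'hostPort: 54322' /tmp/hostport-conflict-sketch.yaml
```

The long runtime (308s) comes from the framework waiting out its scheduling timeout to confirm pod5 never becomes schedulable.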
SSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:18:02.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:18:02.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5439" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":290,"completed":20,"skipped":325,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:18:02.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zwwhh in namespace proxy-8235
I0429 13:18:02.481089       7 runners.go:190] Created replication controller with name: proxy-service-zwwhh, namespace: proxy-8235, replica count: 1
I0429 13:18:03.531640       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:18:04.531811       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:18:05.532082       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:18:06.532356       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:07.532548       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:08.532747       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:09.533005       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:10.533399       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:11.533639       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:12.533849       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:13.534068       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 13:18:14.534291       7 runners.go:190] proxy-service-zwwhh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:18:14.538: INFO: setup took 12.138501658s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 8.736242ms)
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 8.95326ms)
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 8.848753ms)
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 8.747669ms)
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 8.790291ms)
Apr 29 13:18:14.547: INFO: (0) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 9.222646ms)
Apr 29 13:18:14.549: INFO: (0) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 10.694499ms)
Apr 29 13:18:14.551: INFO: (0) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 12.472366ms)
Apr 29 13:18:14.551: INFO: (0) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 12.532998ms)
Apr 29 13:18:14.551: INFO: (0) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 12.557969ms)
Apr 29 13:18:14.551: INFO: (0) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 12.574104ms)
Apr 29 13:18:14.558: INFO: (0) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 19.690875ms)
Apr 29 13:18:14.558: INFO: (0) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 19.560415ms)
Apr 29 13:18:14.558: INFO: (0) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 19.838992ms)
Apr 29 13:18:14.558: INFO: (0) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 5.49001ms)
Apr 29 13:18:14.564: INFO: (1) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 5.660655ms)
Apr 29 13:18:14.564: INFO: (1) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 6.026276ms)
Apr 29 13:18:14.564: INFO: (1) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 6.062777ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 6.346133ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 6.814271ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.9141ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 6.884145ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 6.852653ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 6.950425ms)
Apr 29 13:18:14.565: INFO: (1) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.965478ms)
Apr 29 13:18:14.568: INFO: (1) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 9.86777ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 5.594642ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 5.619853ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 5.685608ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 5.683361ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.95587ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 5.817813ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.024764ms)
Apr 29 13:18:14.574: INFO: (2) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 6.366989ms)
Apr 29 13:18:14.575: INFO: (2) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.454141ms)
Apr 29 13:18:14.578: INFO: (3) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.121968ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.874671ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.927156ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 3.379713ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.825346ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 4.631486ms)
Apr 29 13:18:14.580: INFO: (3) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.808651ms)
Apr 29 13:18:14.581: INFO: (3) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 5.678459ms)
Apr 29 13:18:14.581: INFO: (3) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 6.180773ms)
Apr 29 13:18:14.581: INFO: (3) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 5.645365ms)
Apr 29 13:18:14.581: INFO: (3) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 5.536012ms)
Apr 29 13:18:14.582: INFO: (3) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 5.352027ms)
Apr 29 13:18:14.582: INFO: (3) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 5.595248ms)
Apr 29 13:18:14.582: INFO: (3) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 5.627195ms)
Apr 29 13:18:14.588: INFO: (4) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 5.60458ms)
Apr 29 13:18:14.588: INFO: (4) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 5.587456ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.408639ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.460006ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.501707ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.499107ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.497969ms)
Apr 29 13:18:14.592: INFO: (5) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.512093ms)
Apr 29 13:18:14.593: INFO: (5) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 4.679464ms)
Apr 29 13:18:14.593: INFO: (5) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 4.343103ms)
Apr 29 13:18:14.599: INFO: (6) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.590052ms)
Apr 29 13:18:14.599: INFO: (6) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.650794ms)
Apr 29 13:18:14.599: INFO: (6) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.758ms)
Apr 29 13:18:14.599: INFO: (6) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 4.918173ms)
Apr 29 13:18:14.600: INFO: (6) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 6.033885ms)
Apr 29 13:18:14.600: INFO: (6) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 6.088615ms)
Apr 29 13:18:14.600: INFO: (6) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 6.149852ms)
Apr 29 13:18:14.603: INFO: (7) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 2.629374ms)
Apr 29 13:18:14.603: INFO: (7) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 2.607495ms)
Apr 29 13:18:14.605: INFO: (7) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.976334ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 5.284606ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 5.315494ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 5.35542ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 5.517462ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 5.751728ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.793818ms)
Apr 29 13:18:14.606: INFO: (7) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 4.686008ms)
Apr 29 13:18:14.617: INFO: (8) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 5.858864ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 6.013707ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 6.343974ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 6.195786ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 5.810192ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.964942ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 6.135901ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 6.325614ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 6.227136ms)
Apr 29 13:18:14.618: INFO: (8) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.193683ms)
Apr 29 13:18:14.619: INFO: (8) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 6.724785ms)
Apr 29 13:18:14.621: INFO: (9) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 2.329932ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.578342ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.61721ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.647184ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.623433ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.669113ms)
Apr 29 13:18:14.623: INFO: (9) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 4.701753ms)
Apr 29 13:18:14.624: INFO: (9) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 4.188089ms)
Apr 29 13:18:14.629: INFO: (10) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.191537ms)
Apr 29 13:18:14.629: INFO: (10) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.25175ms)
Apr 29 13:18:14.629: INFO: (10) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 4.29593ms)
Apr 29 13:18:14.629: INFO: (10) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 4.447618ms)
Apr 29 13:18:14.629: INFO: (10) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 5.319738ms)
Apr 29 13:18:14.635: INFO: (11) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.191521ms)
Apr 29 13:18:14.635: INFO: (11) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 5.344517ms)
Apr 29 13:18:14.635: INFO: (11) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.251104ms)
Apr 29 13:18:14.636: INFO: (11) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 6.701117ms)
Apr 29 13:18:14.637: INFO: (11) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 6.941625ms)
Apr 29 13:18:14.637: INFO: (11) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 7.028692ms)
Apr 29 13:18:14.637: INFO: (11) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 6.981573ms)
Apr 29 13:18:14.641: INFO: (12) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.778244ms)
Apr 29 13:18:14.641: INFO: (12) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 4.078142ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.195141ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 4.217256ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 4.324186ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 4.349355ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 4.419069ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 4.509309ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 4.368608ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.588713ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.508813ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.638125ms)
Apr 29 13:18:14.642: INFO: (12) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 4.855506ms)
Apr 29 13:18:14.646: INFO: (13) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 3.235179ms)
Apr 29 13:18:14.646: INFO: (13) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 3.361333ms)
Apr 29 13:18:14.646: INFO: (13) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.376287ms)
Apr 29 13:18:14.646: INFO: (13) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 3.961146ms)
Apr 29 13:18:14.646: INFO: (13) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.905851ms)
Apr 29 13:18:14.647: INFO: (13) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.387271ms)
Apr 29 13:18:14.647: INFO: (13) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.944949ms)
Apr 29 13:18:14.647: INFO: (13) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 4.928461ms)
Apr 29 13:18:14.647: INFO: (13) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 5.123494ms)
Apr 29 13:18:14.648: INFO: (13) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 5.169835ms)
Apr 29 13:18:14.648: INFO: (13) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 5.131704ms)
Apr 29 13:18:14.648: INFO: (13) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 5.344555ms)
Apr 29 13:18:14.648: INFO: (13) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.404621ms)
Apr 29 13:18:14.648: INFO: (13) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 3.011663ms)
Apr 29 13:18:14.651: INFO: (14) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.292182ms)
Apr 29 13:18:14.651: INFO: (14) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.276032ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 3.610186ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 3.876233ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.928527ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 4.316314ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 4.460559ms)
Apr 29 13:18:14.652: INFO: (14) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.472264ms)
Apr 29 13:18:14.653: INFO: (14) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 4.722041ms)
Apr 29 13:18:14.653: INFO: (14) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 4.661364ms)
Apr 29 13:18:14.653: INFO: (14) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 4.704921ms)
Apr 29 13:18:14.653: INFO: (14) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 4.862942ms)
Apr 29 13:18:14.653: INFO: (14) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 4.710915ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 4.822628ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 5.014172ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 5.026425ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.127877ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 5.379469ms)
Apr 29 13:18:14.658: INFO: (15) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 5.238838ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 5.639726ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 5.674992ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ... (200; 5.652474ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 5.646961ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.637643ms)
Apr 29 13:18:14.659: INFO: (15) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 5.652864ms)
Apr 29 13:18:14.662: INFO: (16) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.450156ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 3.744788ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.913936ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 3.861321ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.940182ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 3.990016ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.986443ms)
Apr 29 13:18:14.663: INFO: (16) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 3.996878ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 4.824919ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 5.092479ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 5.161784ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.253486ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 5.178984ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 5.210459ms)
Apr 29 13:18:14.664: INFO: (16) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 5.205178ms)
Apr 29 13:18:14.667: INFO: (17) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.019475ms)
Apr 29 13:18:14.668: INFO: (17) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 3.296689ms)
Apr 29 13:18:14.668: INFO: (17) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.266306ms)
Apr 29 13:18:14.668: INFO: (17) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 3.303841ms)
Apr 29 13:18:14.668: INFO: (17) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 3.265099ms)
Apr 29 13:18:14.668: INFO: (17) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 3.354983ms)
Apr 29 13:18:14.669: INFO: (17) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 4.776647ms)
Apr 29 13:18:14.669: INFO: (17) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 4.792796ms)
Apr 29 13:18:14.669: INFO: (17) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 4.88703ms)
Apr 29 13:18:14.669: INFO: (17) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 4.897535ms)
Apr 29 13:18:14.669: INFO: (17) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 5.0689ms)
Apr 29 13:18:14.670: INFO: (17) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 5.67372ms)
Apr 29 13:18:14.670: INFO: (17) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 5.783973ms)
Apr 29 13:18:14.670: INFO: (17) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.911789ms)
Apr 29 13:18:14.670: INFO: (17) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test<... (200; 6.668084ms)
Apr 29 13:18:14.679: INFO: (18) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 8.690279ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 9.255052ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 9.310207ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 9.660854ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname2/proxy/: bar (200; 9.715236ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname2/proxy/: tls qux (200; 9.77126ms)
Apr 29 13:18:14.680: INFO: (18) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: test (200; 9.819316ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 10.021373ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 10.078236ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 10.243651ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 10.1676ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 10.46413ms)
Apr 29 13:18:14.681: INFO: (18) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 10.595498ms)
Apr 29 13:18:14.685: INFO: (19) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 3.474063ms)
Apr 29 13:18:14.685: INFO: (19) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:460/proxy/: tls baz (200; 3.599332ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:160/proxy/: foo (200; 4.117933ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/services/https:proxy-service-zwwhh:tlsportname1/proxy/: tls baz (200; 4.620718ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname1/proxy/: foo (200; 4.679151ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 4.917012ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/services/proxy-service-zwwhh:portname2/proxy/: bar (200; 4.948422ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/services/http:proxy-service-zwwhh:portname1/proxy/: foo (200; 4.92149ms)
Apr 29 13:18:14.686: INFO: (19) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp/proxy/: test (200; 5.111451ms)
Apr 29 13:18:14.687: INFO: (19) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:1080/proxy/: ... (200; 5.145283ms)
Apr 29 13:18:14.687: INFO: (19) /api/v1/namespaces/proxy-8235/pods/http:proxy-service-zwwhh-swjkp:162/proxy/: bar (200; 5.452711ms)
Apr 29 13:18:14.687: INFO: (19) /api/v1/namespaces/proxy-8235/pods/proxy-service-zwwhh-swjkp:1080/proxy/: test<... (200; 5.398976ms)
Apr 29 13:18:14.687: INFO: (19) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:462/proxy/: tls qux (200; 5.494414ms)
Apr 29 13:18:14.687: INFO: (19) /api/v1/namespaces/proxy-8235/pods/https:proxy-service-zwwhh-swjkp:443/proxy/: ...
------------------------------
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod test-webserver-f4a0e960-37c1-4dc1-846e-91e5d9b85095 in namespace container-probe-6483
Apr 29 13:18:27.571: INFO: Started pod test-webserver-f4a0e960-37c1-4dc1-846e-91e5d9b85095 in namespace container-probe-6483
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 13:18:27.575: INFO: Initial restart count of pod test-webserver-f4a0e960-37c1-4dc1-846e-91e5d9b85095 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:22:29.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6483" for this suite.

• [SLOW TEST:246.076 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":290,"completed":22,"skipped":333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:22:29.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 29 13:22:33.871: INFO: &Pod{ObjectMeta:{send-events-f6e98411-f96f-4abc-bd1b-c14e725f2812  events-7180 /api/v1/namespaces/events-7180/pods/send-events-f6e98411-f96f-4abc-bd1b-c14e725f2812 f53606da-cee5-4079-95e6-b2f9fb262e14 60245 0 2020-04-29 13:22:29 +0000 UTC   map[name:foo time:725812234] map[] [] []  [{e2e.test Update v1 2020-04-29 13:22:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 13:22:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.32\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-849v8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-849v8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:
nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-849v8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]
TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:22:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:22:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:22:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:22:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.32,StartTime:2020-04-29 13:22:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 13:22:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://f3778334677a31048479cf63b6e2721e562eb07b05380c66756d404c3d28f67f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Apr 29 13:22:35.883: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 29 13:22:37.890: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:22:37.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7180" for this suite.

• [SLOW TEST:8.469 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":290,"completed":23,"skipped":362,"failed":0}
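The test above waits until it has seen one event from the scheduler and one from the kubelet that reference the pod. The selection logic can be sketched client-side as a filter on `source.component` and `involvedObject.name`; the dict shape and names below are a simplified stand-in for the real Event API objects, not the actual e2e framework code:

```python
def events_for_pod(events, pod_name, component):
    """Filter event records down to those emitted for a given pod by a
    given component (e.g. 'default-scheduler' or 'kubelet')."""
    return [
        e for e in events
        if e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == component
    ]

# Hypothetical sample events mirroring what the test looks for.
events = [
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "kubelet"}, "reason": "Pulled"},
    {"involvedObject": {"name": "other-pod"},
     "source": {"component": "kubelet"}, "reason": "Started"},
]

print(len(events_for_pod(events, "send-events-x", "default-scheduler")))  # 1
```

The test passes once both filters return at least one event, which is what "Saw scheduler event" / "Saw kubelet event" report above.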
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:22:38.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 13:22:38.061: INFO: Waiting up to 5m0s for pod "pod-9318cd9a-b73a-45b9-9754-92cfa95a855c" in namespace "emptydir-5953" to be "Succeeded or Failed"
Apr 29 13:22:38.066: INFO: Pod "pod-9318cd9a-b73a-45b9-9754-92cfa95a855c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193317ms
Apr 29 13:22:40.070: INFO: Pod "pod-9318cd9a-b73a-45b9-9754-92cfa95a855c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008904595s
Apr 29 13:22:42.075: INFO: Pod "pod-9318cd9a-b73a-45b9-9754-92cfa95a855c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013679183s
STEP: Saw pod success
Apr 29 13:22:42.075: INFO: Pod "pod-9318cd9a-b73a-45b9-9754-92cfa95a855c" satisfied condition "Succeeded or Failed"
Apr 29 13:22:42.078: INFO: Trying to get logs from node kali-worker2 pod pod-9318cd9a-b73a-45b9-9754-92cfa95a855c container test-container: 
STEP: delete the pod
Apr 29 13:22:42.178: INFO: Waiting for pod pod-9318cd9a-b73a-45b9-9754-92cfa95a855c to disappear
Apr 29 13:22:42.200: INFO: Pod pod-9318cd9a-b73a-45b9-9754-92cfa95a855c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:22:42.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5953" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":24,"skipped":370,"failed":0}
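The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines, with `Elapsed:` roughly every two seconds, come from a poll loop in the e2e framework. A minimal sketch of that loop, with our own function and parameter names (the injectable `now`/`sleep` hooks are for illustration, not part of the real framework):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase
    ('Succeeded' or 'Failed'), giving up after `timeout` seconds."""
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

Each iteration corresponds to one `Phase="Pending" ... Elapsed: ...` log line above; the loop ends when the pod reports `Phase="Succeeded"`, which the framework then logs as satisfying the condition.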
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:22:42.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:22:42.821: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:22:49.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3177" for this suite.

• [SLOW TEST:6.905 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":290,"completed":25,"skipped":398,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:22:49.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-8637
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Apr 29 13:22:49.209: INFO: Found 0 stateful pods, waiting for 3
Apr 29 13:22:59.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:22:59.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:22:59.214: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 29 13:23:09.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:23:09.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:23:09.214: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 29 13:23:09.240: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 29 13:23:19.334: INFO: Updating stateful set ss2
Apr 29 13:23:19.385: INFO: Waiting for Pod statefulset-8637/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Apr 29 13:23:29.790: INFO: Found 2 stateful pods, waiting for 3
Apr 29 13:23:39.795: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:23:39.795: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:23:39.795: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 29 13:23:39.821: INFO: Updating stateful set ss2
Apr 29 13:23:39.865: INFO: Waiting for Pod statefulset-8637/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:23:49.873: INFO: Waiting for Pod statefulset-8637/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 29 13:23:59.893: INFO: Updating stateful set ss2
Apr 29 13:23:59.900: INFO: Waiting for StatefulSet statefulset-8637/ss2 to complete update
Apr 29 13:23:59.900: INFO: Waiting for Pod statefulset-8637/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Apr 29 13:24:09.909: INFO: Deleting all statefulset in ns statefulset-8637
Apr 29 13:24:09.912: INFO: Scaling statefulset ss2 to 0
Apr 29 13:24:40.110: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:24:40.113: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:24:40.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8637" for this suite.

• [SLOW TEST:111.024 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":290,"completed":26,"skipped":415,"failed":0}
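The canary and phased steps above exercise the StatefulSet `RollingUpdate` `partition` field: pods with ordinal greater than or equal to the partition are moved to the update revision, while lower ordinals stay on the current revision (so a partition larger than the replica count updates nothing, per the "Not applying an update" step). A sketch of that per-pod decision, with illustrative names:

```python
def target_revision(ordinal, partition, current_rev, update_rev):
    """RollingUpdate partition semantics: pods with ordinal >= partition
    are moved to the update revision; lower ordinals keep the current one."""
    return update_rev if ordinal >= partition else current_rev

# With 3 replicas and partition=2, only the highest-ordinal pod is canaried:
revs = [target_revision(i, 2, "rev-old", "rev-new") for i in range(3)]
print(revs)  # ['rev-old', 'rev-old', 'rev-new']
```

Lowering the partition step by step is what turns the canary into the "phased rolling update" the test then waits on for `ss2-1` and `ss2-0`.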
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:24:40.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 29 13:24:40.228: INFO: Waiting up to 5m0s for pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb" in namespace "emptydir-1791" to be "Succeeded or Failed"
Apr 29 13:24:40.275: INFO: Pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.272822ms
Apr 29 13:24:42.279: INFO: Pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05047758s
Apr 29 13:24:44.283: INFO: Pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb": Phase="Running", Reason="", readiness=true. Elapsed: 4.054951456s
Apr 29 13:24:46.287: INFO: Pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059007261s
STEP: Saw pod success
Apr 29 13:24:46.288: INFO: Pod "pod-c2e8705a-387f-40c3-8e73-4d810760cedb" satisfied condition "Succeeded or Failed"
Apr 29 13:24:46.291: INFO: Trying to get logs from node kali-worker2 pod pod-c2e8705a-387f-40c3-8e73-4d810760cedb container test-container: 
STEP: delete the pod
Apr 29 13:24:46.361: INFO: Waiting for pod pod-c2e8705a-387f-40c3-8e73-4d810760cedb to disappear
Apr 29 13:24:46.375: INFO: Pod pod-c2e8705a-387f-40c3-8e73-4d810760cedb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:24:46.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1791" for this suite.

• [SLOW TEST:6.243 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":27,"skipped":425,"failed":0}
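The mode values in these tests are easy to misread in the raw logs: the API serializes file modes as decimal integers, so the `DefaultMode:*420` in the Pod dump near the top of this run is octal `0644`, and the `0777` the emptydir tests assert serializes as 511. A one-liner makes the conversion explicit:

```python
def mode_octal(decimal_mode):
    """Render a decimal file mode from the API as the octal string
    users write in manifests (e.g. 420 -> '0644')."""
    return format(decimal_mode, "04o")

print(mode_octal(420))  # 0644
print(mode_octal(511))  # 0777
```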
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:24:46.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:26:46.516: INFO: Deleting pod "var-expansion-27ff2dcd-5314-4a82-adb8-afac804a08d9" in namespace "var-expansion-9959"
Apr 29 13:26:46.521: INFO: Wait up to 5m0s for pod "var-expansion-27ff2dcd-5314-4a82-adb8-afac804a08d9" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:26:50.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9959" for this suite.

• [SLOW TEST:124.181 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":290,"completed":28,"skipped":425,"failed":0}
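This test passes when the pod with a backtick in its volume `subPathExpr` never starts and is then cleaned up, which is why the log shows only the deletion. Kubernetes expands `$(VAR_NAME)` references in subpaths but is not a shell, so backticks must be rejected rather than executed. A simplified sketch of expand-then-validate logic under our own rules (the real validation lives in the apiserver/kubelet and is stricter):

```python
import re

# Matches $(VAR_NAME) references, the only substitution Kubernetes performs.
VAR_REF = re.compile(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)")

def expand_subpath(subpath, env):
    """Expand $(VAR) references from env, leaving unknown references
    untouched, then reject backticks (simplified illustrative rule)."""
    expanded = VAR_REF.sub(lambda m: env.get(m.group(1), m.group(0)), subpath)
    if "`" in expanded:
        raise ValueError(f"invalid subPath {expanded!r}: backticks are not allowed")
    return expanded

print(expand_subpath("logs/$(POD_NAME)", {"POD_NAME": "p1"}))  # logs/p1
```

Feeding a value such as `` `hostname` `` through this raises, which is the behavior the conformance test asserts end to end.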
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:26:50.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-976
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating stateful set ss in namespace statefulset-976
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-976
Apr 29 13:26:50.693: INFO: Found 0 stateful pods, waiting for 1
Apr 29 13:27:00.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 29 13:27:00.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:27:04.096: INFO: stderr: "I0429 13:27:03.968368     428 log.go:172] (0xc00003ad10) (0xc00067cfa0) Create stream\nI0429 13:27:03.968424     428 log.go:172] (0xc00003ad10) (0xc00067cfa0) Stream added, broadcasting: 1\nI0429 13:27:03.970444     428 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0429 13:27:03.970487     428 log.go:172] (0xc00003ad10) (0xc000602d20) Create stream\nI0429 13:27:03.970505     428 log.go:172] (0xc00003ad10) (0xc000602d20) Stream added, broadcasting: 3\nI0429 13:27:03.971580     428 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0429 13:27:03.971622     428 log.go:172] (0xc00003ad10) (0xc0005865a0) Create stream\nI0429 13:27:03.971635     428 log.go:172] (0xc00003ad10) (0xc0005865a0) Stream added, broadcasting: 5\nI0429 13:27:03.972549     428 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0429 13:27:04.056581     428 log.go:172] (0xc00003ad10) Data frame received for 5\nI0429 13:27:04.056607     428 log.go:172] (0xc0005865a0) (5) Data frame handling\nI0429 13:27:04.056625     428 log.go:172] (0xc0005865a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:27:04.087788     428 log.go:172] (0xc00003ad10) Data frame received for 3\nI0429 13:27:04.087837     428 log.go:172] (0xc000602d20) (3) Data frame handling\nI0429 13:27:04.087925     428 log.go:172] (0xc000602d20) (3) Data frame sent\nI0429 13:27:04.088126     428 log.go:172] (0xc00003ad10) Data frame received for 3\nI0429 13:27:04.088161     428 log.go:172] (0xc000602d20) (3) Data frame handling\nI0429 13:27:04.088346     428 log.go:172] (0xc00003ad10) Data frame received for 5\nI0429 13:27:04.088380     428 log.go:172] (0xc0005865a0) (5) Data frame handling\nI0429 13:27:04.090432     428 log.go:172] (0xc00003ad10) Data frame received for 1\nI0429 13:27:04.090470     428 log.go:172] (0xc00067cfa0) (1) Data frame handling\nI0429 13:27:04.090509     428 log.go:172] (0xc00067cfa0) (1) Data frame sent\nI0429 13:27:04.090538  
   428 log.go:172] (0xc00003ad10) (0xc00067cfa0) Stream removed, broadcasting: 1\nI0429 13:27:04.090605     428 log.go:172] (0xc00003ad10) Go away received\nI0429 13:27:04.090993     428 log.go:172] (0xc00003ad10) (0xc00067cfa0) Stream removed, broadcasting: 1\nI0429 13:27:04.091017     428 log.go:172] (0xc00003ad10) (0xc000602d20) Stream removed, broadcasting: 3\nI0429 13:27:04.091030     428 log.go:172] (0xc00003ad10) (0xc0005865a0) Stream removed, broadcasting: 5\n"
Apr 29 13:27:04.096: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:27:04.096: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Apr 29 13:27:04.101: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 29 13:27:14.106: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:27:14.106: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:27:14.262: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Apr 29 13:27:14.262: INFO: ss-0  kali-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:14.263: INFO: 
Apr 29 13:27:14.263: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 29 13:27:15.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.853419843s
Apr 29 13:27:16.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.8483418s
Apr 29 13:27:17.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.77480921s
Apr 29 13:27:18.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.771115375s
Apr 29 13:27:19.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.686031569s
Apr 29 13:27:20.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.681418018s
Apr 29 13:27:21.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.590157195s
Apr 29 13:27:22.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.584691997s
Apr 29 13:27:23.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 579.450809ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-976
Apr 29 13:27:24.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:27:24.784: INFO: stderr: "I0429 13:27:24.684852     459 log.go:172] (0xc0000e08f0) (0xc00055c500) Create stream\nI0429 13:27:24.684907     459 log.go:172] (0xc0000e08f0) (0xc00055c500) Stream added, broadcasting: 1\nI0429 13:27:24.689993     459 log.go:172] (0xc0000e08f0) Reply frame received for 1\nI0429 13:27:24.690036     459 log.go:172] (0xc0000e08f0) (0xc000592500) Create stream\nI0429 13:27:24.690050     459 log.go:172] (0xc0000e08f0) (0xc000592500) Stream added, broadcasting: 3\nI0429 13:27:24.691548     459 log.go:172] (0xc0000e08f0) Reply frame received for 3\nI0429 13:27:24.691582     459 log.go:172] (0xc0000e08f0) (0xc000592a00) Create stream\nI0429 13:27:24.691595     459 log.go:172] (0xc0000e08f0) (0xc000592a00) Stream added, broadcasting: 5\nI0429 13:27:24.692526     459 log.go:172] (0xc0000e08f0) Reply frame received for 5\nI0429 13:27:24.774864     459 log.go:172] (0xc0000e08f0) Data frame received for 5\nI0429 13:27:24.774889     459 log.go:172] (0xc000592a00) (5) Data frame handling\nI0429 13:27:24.774902     459 log.go:172] (0xc000592a00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0429 13:27:24.774921     459 log.go:172] (0xc0000e08f0) Data frame received for 3\nI0429 13:27:24.774929     459 log.go:172] (0xc000592500) (3) Data frame handling\nI0429 13:27:24.774937     459 log.go:172] (0xc000592500) (3) Data frame sent\nI0429 13:27:24.774945     459 log.go:172] (0xc0000e08f0) Data frame received for 3\nI0429 13:27:24.774951     459 log.go:172] (0xc000592500) (3) Data frame handling\nI0429 13:27:24.775008     459 log.go:172] (0xc0000e08f0) Data frame received for 5\nI0429 13:27:24.775034     459 log.go:172] (0xc000592a00) (5) Data frame handling\nI0429 13:27:24.776766     459 log.go:172] (0xc0000e08f0) Data frame received for 1\nI0429 13:27:24.776791     459 log.go:172] (0xc00055c500) (1) Data frame handling\nI0429 13:27:24.776817     459 log.go:172] (0xc00055c500) (1) Data frame sent\nI0429 13:27:24.776831  
   459 log.go:172] (0xc0000e08f0) (0xc00055c500) Stream removed, broadcasting: 1\nI0429 13:27:24.776954     459 log.go:172] (0xc0000e08f0) Go away received\nI0429 13:27:24.777091     459 log.go:172] (0xc0000e08f0) (0xc00055c500) Stream removed, broadcasting: 1\nI0429 13:27:24.777105     459 log.go:172] (0xc0000e08f0) (0xc000592500) Stream removed, broadcasting: 3\nI0429 13:27:24.777269     459 log.go:172] (0xc0000e08f0) (0xc000592a00) Stream removed, broadcasting: 5\n"
Apr 29 13:27:24.784: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:27:24.784: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Apr 29 13:27:24.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:27:24.982: INFO: stderr: "I0429 13:27:24.911214     481 log.go:172] (0xc000c3afd0) (0xc0006f7540) Create stream\nI0429 13:27:24.911274     481 log.go:172] (0xc000c3afd0) (0xc0006f7540) Stream added, broadcasting: 1\nI0429 13:27:24.914787     481 log.go:172] (0xc000c3afd0) Reply frame received for 1\nI0429 13:27:24.914823     481 log.go:172] (0xc000c3afd0) (0xc0006d4aa0) Create stream\nI0429 13:27:24.914833     481 log.go:172] (0xc000c3afd0) (0xc0006d4aa0) Stream added, broadcasting: 3\nI0429 13:27:24.915594     481 log.go:172] (0xc000c3afd0) Reply frame received for 3\nI0429 13:27:24.915617     481 log.go:172] (0xc000c3afd0) (0xc00065c5a0) Create stream\nI0429 13:27:24.915624     481 log.go:172] (0xc000c3afd0) (0xc00065c5a0) Stream added, broadcasting: 5\nI0429 13:27:24.916356     481 log.go:172] (0xc000c3afd0) Reply frame received for 5\nI0429 13:27:24.972111     481 log.go:172] (0xc000c3afd0) Data frame received for 5\nI0429 13:27:24.972143     481 log.go:172] (0xc00065c5a0) (5) Data frame handling\nI0429 13:27:24.972156     481 log.go:172] (0xc00065c5a0) (5) Data frame sent\nI0429 13:27:24.972174     481 log.go:172] (0xc000c3afd0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0429 13:27:24.972195     481 log.go:172] (0xc00065c5a0) (5) Data frame handling\nI0429 13:27:24.972251     481 log.go:172] (0xc000c3afd0) Data frame received for 3\nI0429 13:27:24.972311     481 log.go:172] (0xc0006d4aa0) (3) Data frame handling\nI0429 13:27:24.972353     481 log.go:172] (0xc0006d4aa0) (3) Data frame sent\nI0429 13:27:24.972381     481 log.go:172] (0xc000c3afd0) Data frame received for 3\nI0429 13:27:24.972413     481 log.go:172] (0xc0006d4aa0) (3) Data frame handling\nI0429 13:27:24.973836     481 log.go:172] (0xc000c3afd0) Data frame received for 1\nI0429 13:27:24.973848     481 log.go:172] (0xc0006f7540) (1) Data frame handling\nI0429 13:27:24.973855     481 
log.go:172] (0xc0006f7540) (1) Data frame sent\nI0429 13:27:24.973865     481 log.go:172] (0xc000c3afd0) (0xc0006f7540) Stream removed, broadcasting: 1\nI0429 13:27:24.974044     481 log.go:172] (0xc000c3afd0) Go away received\nI0429 13:27:24.974126     481 log.go:172] (0xc000c3afd0) (0xc0006f7540) Stream removed, broadcasting: 1\nI0429 13:27:24.974147     481 log.go:172] (0xc000c3afd0) (0xc0006d4aa0) Stream removed, broadcasting: 3\nI0429 13:27:24.974153     481 log.go:172] (0xc000c3afd0) (0xc00065c5a0) Stream removed, broadcasting: 5\n"
Apr 29 13:27:24.982: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:27:24.982: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Apr 29 13:27:24.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 29 13:27:25.196: INFO: stderr: "I0429 13:27:25.111883     504 log.go:172] (0xc0009900b0) (0xc0002546e0) Create stream\nI0429 13:27:25.111961     504 log.go:172] (0xc0009900b0) (0xc0002546e0) Stream added, broadcasting: 1\nI0429 13:27:25.114188     504 log.go:172] (0xc0009900b0) Reply frame received for 1\nI0429 13:27:25.114244     504 log.go:172] (0xc0009900b0) (0xc000688500) Create stream\nI0429 13:27:25.114307     504 log.go:172] (0xc0009900b0) (0xc000688500) Stream added, broadcasting: 3\nI0429 13:27:25.115178     504 log.go:172] (0xc0009900b0) Reply frame received for 3\nI0429 13:27:25.115242     504 log.go:172] (0xc0009900b0) (0xc000672be0) Create stream\nI0429 13:27:25.115296     504 log.go:172] (0xc0009900b0) (0xc000672be0) Stream added, broadcasting: 5\nI0429 13:27:25.116349     504 log.go:172] (0xc0009900b0) Reply frame received for 5\nI0429 13:27:25.189938     504 log.go:172] (0xc0009900b0) Data frame received for 3\nI0429 13:27:25.189966     504 log.go:172] (0xc000688500) (3) Data frame handling\nI0429 13:27:25.189978     504 log.go:172] (0xc000688500) (3) Data frame sent\nI0429 13:27:25.189999     504 log.go:172] (0xc0009900b0) Data frame received for 5\nI0429 13:27:25.190024     504 log.go:172] (0xc000672be0) (5) Data frame handling\nI0429 13:27:25.190041     504 log.go:172] (0xc000672be0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0429 13:27:25.190062     504 log.go:172] (0xc0009900b0) Data frame received for 3\nI0429 13:27:25.190080     504 log.go:172] (0xc000688500) (3) Data frame handling\nI0429 13:27:25.190139     504 log.go:172] (0xc0009900b0) Data frame received for 5\nI0429 13:27:25.190149     504 log.go:172] (0xc000672be0) (5) Data frame handling\nI0429 13:27:25.191618     504 log.go:172] (0xc0009900b0) Data frame received for 1\nI0429 13:27:25.191644     504 log.go:172] (0xc0002546e0) (1) Data frame handling\nI0429 13:27:25.191660     504 
log.go:172] (0xc0002546e0) (1) Data frame sent\nI0429 13:27:25.191682     504 log.go:172] (0xc0009900b0) (0xc0002546e0) Stream removed, broadcasting: 1\nI0429 13:27:25.191703     504 log.go:172] (0xc0009900b0) Go away received\nI0429 13:27:25.192149     504 log.go:172] (0xc0009900b0) (0xc0002546e0) Stream removed, broadcasting: 1\nI0429 13:27:25.192170     504 log.go:172] (0xc0009900b0) (0xc000688500) Stream removed, broadcasting: 3\nI0429 13:27:25.192180     504 log.go:172] (0xc0009900b0) (0xc000672be0) Stream removed, broadcasting: 5\n"
Apr 29 13:27:25.197: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 29 13:27:25.197: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Apr 29 13:27:25.200: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:27:25.200: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 13:27:25.200: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 29 13:27:25.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:27:25.395: INFO: stderr: "I0429 13:27:25.331873     524 log.go:172] (0xc000a52000) (0xc000529180) Create stream\nI0429 13:27:25.331932     524 log.go:172] (0xc000a52000) (0xc000529180) Stream added, broadcasting: 1\nI0429 13:27:25.335134     524 log.go:172] (0xc000a52000) Reply frame received for 1\nI0429 13:27:25.335164     524 log.go:172] (0xc000a52000) (0xc0004a4d20) Create stream\nI0429 13:27:25.335173     524 log.go:172] (0xc000a52000) (0xc0004a4d20) Stream added, broadcasting: 3\nI0429 13:27:25.335861     524 log.go:172] (0xc000a52000) Reply frame received for 3\nI0429 13:27:25.335884     524 log.go:172] (0xc000a52000) (0xc00066e320) Create stream\nI0429 13:27:25.335893     524 log.go:172] (0xc000a52000) (0xc00066e320) Stream added, broadcasting: 5\nI0429 13:27:25.336689     524 log.go:172] (0xc000a52000) Reply frame received for 5\nI0429 13:27:25.388080     524 log.go:172] (0xc000a52000) Data frame received for 3\nI0429 13:27:25.388143     524 log.go:172] (0xc0004a4d20) (3) Data frame handling\nI0429 13:27:25.388172     524 log.go:172] (0xc0004a4d20) (3) Data frame sent\nI0429 13:27:25.388240     524 log.go:172] (0xc000a52000) Data frame received for 5\nI0429 13:27:25.388260     524 log.go:172] (0xc00066e320) (5) Data frame handling\nI0429 13:27:25.388272     524 log.go:172] (0xc00066e320) (5) Data frame sent\nI0429 13:27:25.388283     524 log.go:172] (0xc000a52000) Data frame received for 5\nI0429 13:27:25.388293     524 log.go:172] (0xc00066e320) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:27:25.388326     524 log.go:172] (0xc000a52000) Data frame received for 3\nI0429 13:27:25.388451     524 log.go:172] (0xc0004a4d20) (3) Data frame handling\nI0429 13:27:25.390079     524 log.go:172] (0xc000a52000) Data frame received for 1\nI0429 13:27:25.390119     524 log.go:172] (0xc000529180) (1) Data frame handling\nI0429 13:27:25.390153     524 log.go:172] (0xc000529180) (1) Data frame sent\nI0429 13:27:25.390184  
   524 log.go:172] (0xc000a52000) (0xc000529180) Stream removed, broadcasting: 1\nI0429 13:27:25.390210     524 log.go:172] (0xc000a52000) Go away received\nI0429 13:27:25.390653     524 log.go:172] (0xc000a52000) (0xc000529180) Stream removed, broadcasting: 1\nI0429 13:27:25.390681     524 log.go:172] (0xc000a52000) (0xc0004a4d20) Stream removed, broadcasting: 3\nI0429 13:27:25.390700     524 log.go:172] (0xc000a52000) (0xc00066e320) Stream removed, broadcasting: 5\n"
Apr 29 13:27:25.395: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:27:25.395: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Apr 29 13:27:25.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:27:25.630: INFO: stderr: "I0429 13:27:25.523768     544 log.go:172] (0xc000b4f1e0) (0xc000b641e0) Create stream\nI0429 13:27:25.523813     544 log.go:172] (0xc000b4f1e0) (0xc000b641e0) Stream added, broadcasting: 1\nI0429 13:27:25.527458     544 log.go:172] (0xc000b4f1e0) Reply frame received for 1\nI0429 13:27:25.527487     544 log.go:172] (0xc000b4f1e0) (0xc0008465a0) Create stream\nI0429 13:27:25.527494     544 log.go:172] (0xc000b4f1e0) (0xc0008465a0) Stream added, broadcasting: 3\nI0429 13:27:25.528272     544 log.go:172] (0xc000b4f1e0) Reply frame received for 3\nI0429 13:27:25.528310     544 log.go:172] (0xc000b4f1e0) (0xc0005cedc0) Create stream\nI0429 13:27:25.528322     544 log.go:172] (0xc000b4f1e0) (0xc0005cedc0) Stream added, broadcasting: 5\nI0429 13:27:25.529043     544 log.go:172] (0xc000b4f1e0) Reply frame received for 5\nI0429 13:27:25.592155     544 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0429 13:27:25.592186     544 log.go:172] (0xc0005cedc0) (5) Data frame handling\nI0429 13:27:25.592209     544 log.go:172] (0xc0005cedc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:27:25.623388     544 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0429 13:27:25.623422     544 log.go:172] (0xc0008465a0) (3) Data frame handling\nI0429 13:27:25.623490     544 log.go:172] (0xc0008465a0) (3) Data frame sent\nI0429 13:27:25.623750     544 log.go:172] (0xc000b4f1e0) Data frame received for 3\nI0429 13:27:25.623776     544 log.go:172] (0xc0008465a0) (3) Data frame handling\nI0429 13:27:25.623801     544 log.go:172] (0xc000b4f1e0) Data frame received for 5\nI0429 13:27:25.623823     544 log.go:172] (0xc0005cedc0) (5) Data frame handling\nI0429 13:27:25.625582     544 log.go:172] (0xc000b4f1e0) Data frame received for 1\nI0429 13:27:25.625599     544 log.go:172] (0xc000b641e0) (1) Data frame handling\nI0429 13:27:25.625611     544 log.go:172] (0xc000b641e0) (1) Data frame sent\nI0429 13:27:25.625619  
   544 log.go:172] (0xc000b4f1e0) (0xc000b641e0) Stream removed, broadcasting: 1\nI0429 13:27:25.625631     544 log.go:172] (0xc000b4f1e0) Go away received\nI0429 13:27:25.625989     544 log.go:172] (0xc000b4f1e0) (0xc000b641e0) Stream removed, broadcasting: 1\nI0429 13:27:25.626017     544 log.go:172] (0xc000b4f1e0) (0xc0008465a0) Stream removed, broadcasting: 3\nI0429 13:27:25.626030     544 log.go:172] (0xc000b4f1e0) (0xc0005cedc0) Stream removed, broadcasting: 5\n"
Apr 29 13:27:25.630: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:27:25.630: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Apr 29 13:27:25.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-976 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 29 13:27:25.883: INFO: stderr: "I0429 13:27:25.774485     565 log.go:172] (0xc000adb600) (0xc000b06500) Create stream\nI0429 13:27:25.774552     565 log.go:172] (0xc000adb600) (0xc000b06500) Stream added, broadcasting: 1\nI0429 13:27:25.779464     565 log.go:172] (0xc000adb600) Reply frame received for 1\nI0429 13:27:25.779503     565 log.go:172] (0xc000adb600) (0xc0008526e0) Create stream\nI0429 13:27:25.779517     565 log.go:172] (0xc000adb600) (0xc0008526e0) Stream added, broadcasting: 3\nI0429 13:27:25.780357     565 log.go:172] (0xc000adb600) Reply frame received for 3\nI0429 13:27:25.780395     565 log.go:172] (0xc000adb600) (0xc0008535e0) Create stream\nI0429 13:27:25.780405     565 log.go:172] (0xc000adb600) (0xc0008535e0) Stream added, broadcasting: 5\nI0429 13:27:25.781394     565 log.go:172] (0xc000adb600) Reply frame received for 5\nI0429 13:27:25.836612     565 log.go:172] (0xc000adb600) Data frame received for 5\nI0429 13:27:25.836635     565 log.go:172] (0xc0008535e0) (5) Data frame handling\nI0429 13:27:25.836647     565 log.go:172] (0xc0008535e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0429 13:27:25.875546     565 log.go:172] (0xc000adb600) Data frame received for 3\nI0429 13:27:25.875602     565 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0429 13:27:25.875655     565 log.go:172] (0xc0008526e0) (3) Data frame sent\nI0429 13:27:25.875671     565 log.go:172] (0xc000adb600) Data frame received for 3\nI0429 13:27:25.875680     565 log.go:172] (0xc0008526e0) (3) Data frame handling\nI0429 13:27:25.875692     565 log.go:172] (0xc000adb600) Data frame received for 5\nI0429 13:27:25.875701     565 log.go:172] (0xc0008535e0) (5) Data frame handling\nI0429 13:27:25.878120     565 log.go:172] (0xc000adb600) Data frame received for 1\nI0429 13:27:25.878142     565 log.go:172] (0xc000b06500) (1) Data frame handling\nI0429 13:27:25.878163     565 log.go:172] (0xc000b06500) (1) Data frame sent\nI0429 13:27:25.878183  
   565 log.go:172] (0xc000adb600) (0xc000b06500) Stream removed, broadcasting: 1\nI0429 13:27:25.878280     565 log.go:172] (0xc000adb600) Go away received\nI0429 13:27:25.878543     565 log.go:172] (0xc000adb600) (0xc000b06500) Stream removed, broadcasting: 1\nI0429 13:27:25.878553     565 log.go:172] (0xc000adb600) (0xc0008526e0) Stream removed, broadcasting: 3\nI0429 13:27:25.878559     565 log.go:172] (0xc000adb600) (0xc0008535e0) Stream removed, broadcasting: 5\n"
Apr 29 13:27:25.883: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 29 13:27:25.883: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
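The six exec calls above first restore and then break each pod's HTTP readiness probe by moving `index.html` into and back out of the Apache web root. A minimal sketch of the breaking step, under stated assumptions (the helper name `break_readiness` is hypothetical, and `kubectl` access to the same cluster and namespace is assumed; `KUBECTL` can be overridden, e.g. `KUBECTL=echo`, for a dry run):

```shell
# Sketch of the readiness-breaking exec the framework runs above.
# break_readiness is a hypothetical helper name; KUBECTL defaults to
# kubectl but may be overridden for a dry run.
KUBECTL="${KUBECTL:-kubectl}"
break_readiness() {
  # $1 = namespace, $2 = pod. Moving index.html out of the web root makes
  # the HTTP readiness probe fail; "|| true" keeps the exec's exit status
  # zero even when the file has already been moved.
  "$KUBECTL" exec --namespace="$1" "$2" -- \
    /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
}
# Example (not run here):
#   for pod in ss-0 ss-1 ss-2; do break_readiness statefulset-976 "$pod"; done
```

The `|| true` matters: the framework reruns the command idempotently, which is why the stderr streams above show a harmless `mv: can't rename` on already-moved files.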

Apr 29 13:27:25.883: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:27:25.908: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Apr 29 13:27:35.916: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:27:35.916: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:27:35.916: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 13:27:35.952: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:35.952: INFO: ss-0  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:35.952: INFO: ss-1  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:35.952: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:35.952: INFO: 
Apr 29 13:27:35.952: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:36.975: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:36.975: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:36.975: INFO: ss-1  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:36.975: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:36.975: INFO: 
Apr 29 13:27:36.975: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:37.986: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:37.987: INFO: ss-0  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:37.987: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:37.987: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:37.987: INFO: 
Apr 29 13:27:37.987: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:39.011: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:39.011: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:39.011: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:39.011: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:39.011: INFO: 
Apr 29 13:27:39.011: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:40.016: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:40.016: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:40.016: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:40.016: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:40.017: INFO: 
Apr 29 13:27:40.017: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:41.022: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:41.022: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:41.022: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:41.023: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:41.023: INFO: 
Apr 29 13:27:41.023: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:42.027: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:42.027: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:42.027: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:42.027: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:42.027: INFO: 
Apr 29 13:27:42.027: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:43.032: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Apr 29 13:27:43.032: INFO: ss-0  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:26:50 +0000 UTC  }]
Apr 29 13:27:43.032: INFO: ss-1  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:25 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:43.032: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-29 13:27:14 +0000 UTC  }]
Apr 29 13:27:43.032: INFO: 
Apr 29 13:27:43.032: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 13:27:44.036: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.891660778s
Apr 29 13:27:45.088: INFO: Verifying statefulset ss doesn't scale past 0 for another 887.220182ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-976
Apr 29 13:27:46.092: INFO: Scaling statefulset ss to 0
Apr 29 13:27:46.102: INFO: Waiting for statefulset status.replicas updated to 0
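The scale-down logged above can be sketched as a plain `kubectl` invocation (hypothetical helper name; assumes access to the same cluster and namespace as the log; `KUBECTL` can be overridden for a dry run):

```shell
# Sketch of the scale-down the framework performs above.
KUBECTL="${KUBECTL:-kubectl}"
scale_statefulset() {
  # $1 = namespace, $2 = statefulset name, $3 = target replica count
  "$KUBECTL" scale statefulset "$2" --namespace="$1" --replicas="$3"
}
# The framework then polls status.replicas until it reaches the target,
# roughly equivalent to:
#   kubectl get statefulset ss -n statefulset-976 \
#     -o jsonpath='{.status.replicas}'
```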
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Apr 29 13:27:46.104: INFO: Deleting all statefulset in ns statefulset-976
Apr 29 13:27:46.107: INFO: Scaling statefulset ss to 0
Apr 29 13:27:46.116: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:27:46.119: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:27:46.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-976" for this suite.

• [SLOW TEST:55.611 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":290,"completed":29,"skipped":447,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:27:46.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:27:50.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2274" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":30,"skipped":450,"failed":0}
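The passing Kubelet test above logs no pod details, but the property it checks can be illustrated with a pod spec: with `securityContext.readOnlyRootFilesystem: true`, a write to `/` fails, so nothing lands on the root filesystem. All names and the image tag below are illustrative assumptions, not taken from the log.

```shell
# Illustrative manifest for a read-only-root-filesystem check; pod name,
# container name, and image tag are assumptions, not from the log.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    # The write to / is expected to fail on a read-only root filesystem.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
echo "Apply with: kubectl apply -f $manifest"
```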
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:27:50.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-6d147179-d507-4c97-b869-66bee562c298
STEP: Creating a pod to test consume configMaps
Apr 29 13:27:50.412: INFO: Waiting up to 5m0s for pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b" in namespace "configmap-2845" to be "Succeeded or Failed"
Apr 29 13:27:50.428: INFO: Pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.364742ms
Apr 29 13:27:53.119: INFO: Pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.70688966s
Apr 29 13:27:55.124: INFO: Pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711739465s
Apr 29 13:27:57.128: INFO: Pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.716544161s
STEP: Saw pod success
Apr 29 13:27:57.129: INFO: Pod "pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b" satisfied condition "Succeeded or Failed"
Apr 29 13:27:57.134: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b container configmap-volume-test: 
STEP: delete the pod
Apr 29 13:27:57.187: INFO: Waiting for pod pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b to disappear
Apr 29 13:27:57.220: INFO: Pod pod-configmaps-e17827c3-3018-42e3-a91d-0ce026b14e6b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:27:57.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2845" for this suite.

• [SLOW TEST:6.930 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":31,"skipped":455,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:27:57.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 13:28:00.440: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:00.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8583" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":32,"skipped":455,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:00.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 29 13:28:07.333: INFO: Successfully updated pod "pod-update-activedeadlineseconds-07d34d92-e71f-4a18-8232-f1b4f4aea09c"
Apr 29 13:28:07.333: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-07d34d92-e71f-4a18-8232-f1b4f4aea09c" in namespace "pods-9178" to be "terminated due to deadline exceeded"
Apr 29 13:28:07.382: INFO: Pod "pod-update-activedeadlineseconds-07d34d92-e71f-4a18-8232-f1b4f4aea09c": Phase="Running", Reason="", readiness=true. Elapsed: 48.482417ms
Apr 29 13:28:09.386: INFO: Pod "pod-update-activedeadlineseconds-07d34d92-e71f-4a18-8232-f1b4f4aea09c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.05248205s
Apr 29 13:28:09.386: INFO: Pod "pod-update-activedeadlineseconds-07d34d92-e71f-4a18-8232-f1b4f4aea09c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9178" for this suite.

• [SLOW TEST:8.747 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":290,"completed":33,"skipped":471,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-dtvs
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 13:28:09.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dtvs" in namespace "subpath-5029" to be "Succeeded or Failed"
Apr 29 13:28:09.531: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Pending", Reason="", readiness=false. Elapsed: 58.982282ms
Apr 29 13:28:11.535: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063176873s
Apr 29 13:28:13.539: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 4.066545693s
Apr 29 13:28:15.543: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 6.071067348s
Apr 29 13:28:17.548: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 8.075560533s
Apr 29 13:28:19.552: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 10.080024612s
Apr 29 13:28:21.556: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 12.083908057s
Apr 29 13:28:23.560: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 14.087697391s
Apr 29 13:28:25.564: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 16.092268565s
Apr 29 13:28:27.568: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 18.096477285s
Apr 29 13:28:29.573: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 20.101259936s
Apr 29 13:28:31.577: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Running", Reason="", readiness=true. Elapsed: 22.105314818s
Apr 29 13:28:33.747: INFO: Pod "pod-subpath-test-configmap-dtvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.275234312s
STEP: Saw pod success
Apr 29 13:28:33.747: INFO: Pod "pod-subpath-test-configmap-dtvs" satisfied condition "Succeeded or Failed"
Apr 29 13:28:33.750: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-dtvs container test-container-subpath-configmap-dtvs: 
STEP: delete the pod
Apr 29 13:28:33.880: INFO: Waiting for pod pod-subpath-test-configmap-dtvs to disappear
Apr 29 13:28:33.890: INFO: Pod pod-subpath-test-configmap-dtvs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dtvs
Apr 29 13:28:33.890: INFO: Deleting pod "pod-subpath-test-configmap-dtvs" in namespace "subpath-5029"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:33.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5029" for this suite.

• [SLOW TEST:24.502 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":290,"completed":34,"skipped":475,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:33.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:33.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9764" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":290,"completed":35,"skipped":482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:34.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:28:34.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:28:36.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763714, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763714, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763714, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763714, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:28:39.928: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3601" for this suite.
STEP: Destroying namespace "webhook-3601-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.994 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":290,"completed":36,"skipped":513,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:40.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:28:40.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815" in namespace "downward-api-3611" to be "Succeeded or Failed"
Apr 29 13:28:40.197: INFO: Pod "downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815": Phase="Pending", Reason="", readiness=false. Elapsed: 38.349328ms
Apr 29 13:28:42.202: INFO: Pod "downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043014805s
Apr 29 13:28:44.207: INFO: Pod "downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047531767s
STEP: Saw pod success
Apr 29 13:28:44.207: INFO: Pod "downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815" satisfied condition "Succeeded or Failed"
Apr 29 13:28:44.210: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815 container client-container: 
STEP: delete the pod
Apr 29 13:28:44.282: INFO: Waiting for pod downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815 to disappear
Apr 29 13:28:44.292: INFO: Pod downwardapi-volume-e70d6df9-8f4a-469c-af97-d8c1e470b815 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:44.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3611" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":290,"completed":37,"skipped":521,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:44.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Apr 29 13:28:44.359: INFO: Waiting up to 5m0s for pod "var-expansion-09c87641-050a-4949-837a-1901ab391d80" in namespace "var-expansion-3541" to be "Succeeded or Failed"
Apr 29 13:28:44.372: INFO: Pod "var-expansion-09c87641-050a-4949-837a-1901ab391d80": Phase="Pending", Reason="", readiness=false. Elapsed: 12.706374ms
Apr 29 13:28:46.376: INFO: Pod "var-expansion-09c87641-050a-4949-837a-1901ab391d80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016713895s
Apr 29 13:28:48.382: INFO: Pod "var-expansion-09c87641-050a-4949-837a-1901ab391d80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022967021s
STEP: Saw pod success
Apr 29 13:28:48.382: INFO: Pod "var-expansion-09c87641-050a-4949-837a-1901ab391d80" satisfied condition "Succeeded or Failed"
Apr 29 13:28:48.385: INFO: Trying to get logs from node kali-worker pod var-expansion-09c87641-050a-4949-837a-1901ab391d80 container dapi-container: 
STEP: delete the pod
Apr 29 13:28:48.418: INFO: Waiting for pod var-expansion-09c87641-050a-4949-837a-1901ab391d80 to disappear
Apr 29 13:28:48.442: INFO: Pod var-expansion-09c87641-050a-4949-837a-1901ab391d80 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:48.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3541" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":290,"completed":38,"skipped":529,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:48.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-0391581c-5459-4d25-82c4-b2db124f79a5
STEP: Creating a pod to test consume configMaps
Apr 29 13:28:48.587: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1" in namespace "projected-8015" to be "Succeeded or Failed"
Apr 29 13:28:48.605: INFO: Pod "pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.160992ms
Apr 29 13:28:50.609: INFO: Pod "pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022272209s
Apr 29 13:28:52.613: INFO: Pod "pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026450257s
STEP: Saw pod success
Apr 29 13:28:52.613: INFO: Pod "pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1" satisfied condition "Succeeded or Failed"
Apr 29 13:28:52.616: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 13:28:52.628: INFO: Waiting for pod pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1 to disappear
Apr 29 13:28:52.633: INFO: Pod pod-projected-configmaps-3687d792-bf57-4b4a-8f9e-d9f4b01e59c1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:28:52.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8015" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":39,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:28:52.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:29:08.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3072" for this suite.

• [SLOW TEST:16.148 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":290,"completed":40,"skipped":545,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:29:08.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:29:09.020: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 29 13:29:09.107: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 29 13:29:14.126: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 13:29:14.126: INFO: Creating deployment "test-rolling-update-deployment"
Apr 29 13:29:14.137: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 29 13:29:14.257: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 29 13:29:16.265: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 29 13:29:16.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763754, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763754, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763754, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763754, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 13:29:18.275: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Apr 29 13:29:18.284: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-5602 /apis/apps/v1/namespaces/deployment-5602/deployments/test-rolling-update-deployment d78dd2fb-2ba0-4aff-b5d4-da71e6653de2 62523 1 2020-04-29 13:29:14 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-04-29 13:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 13:29:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00292e908  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-29 13:29:14 +0000 UTC,LastTransitionTime:2020-04-29 13:29:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-04-29 13:29:17 +0000 UTC,LastTransitionTime:2020-04-29 13:29:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Apr 29 13:29:18.286: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b  deployment-5602 /apis/apps/v1/namespaces/deployment-5602/replicasets/test-rolling-update-deployment-df7bb669b b8d39616-9e5a-4fa5-b93b-76f200172929 62512 1 2020-04-29 13:29:14 +0000 UTC   map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d78dd2fb-2ba0-4aff-b5d4-da71e6653de2 0xc00292f3c0 0xc00292f3c1}] []  [{kube-controller-manager Update apps/v1 2020-04-29 13:29:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d78dd2fb-2ba0-4aff-b5d4-da71e6653de2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod 
pod-template-hash:df7bb669b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00292f438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Apr 29 13:29:18.286: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Apr 29 13:29:18.287: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-5602 /apis/apps/v1/namespaces/deployment-5602/replicasets/test-rolling-update-controller 272daf7e-c622-4016-82fe-87d0b7d3a74b 62522 2 2020-04-29 13:29:09 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d78dd2fb-2ba0-4aff-b5d4-da71e6653de2 0xc00292f247 0xc00292f248}] []  [{e2e.test Update apps/v1 2020-04-29 13:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 13:29:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d78dd2fb-2ba0-4aff-b5d4-da71e6653de2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00292f358  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 13:29:18.290: INFO: Pod "test-rolling-update-deployment-df7bb669b-9kpzg" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-9kpzg test-rolling-update-deployment-df7bb669b- deployment-5602 /api/v1/namespaces/deployment-5602/pods/test-rolling-update-deployment-df7bb669b-9kpzg 36b3db4a-11fa-46e1-b8e7-5f5bd1b43d35 62511 0 2020-04-29 13:29:14 +0000 UTC   map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b b8d39616-9e5a-4fa5-b93b-76f200172929 0xc00292f8f0 0xc00292f8f1}] []  [{kube-controller-manager Update v1 2020-04-29 13:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8d39616-9e5a-4fa5-b93b-76f200172929\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 13:29:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-snpmx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-snpmx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-snpmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:29:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:29:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:29:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:29:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.46,StartTime:2020-04-29 
13:29:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 13:29:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://05f379e649e5dc6bff4a543d5272098f1dd1988e79f18d1baed3e5aca7daabae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:29:18.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5602" for this suite.

• [SLOW TEST:9.472 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":290,"completed":41,"skipped":553,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:29:18.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:29:35.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5562" for this suite.

• [SLOW TEST:17.301 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":290,"completed":42,"skipped":556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:29:35.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-2229
STEP: creating service affinity-clusterip-transition in namespace services-2229
STEP: creating replication controller affinity-clusterip-transition in namespace services-2229
I0429 13:29:35.815421       7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2229, replica count: 3
I0429 13:29:38.865895       7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:29:41.866158       7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:29:41.871: INFO: Creating new exec pod
Apr 29 13:29:46.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2229 execpod-affinity7wkjr -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Apr 29 13:29:47.116: INFO: stderr: "I0429 13:29:47.020852     585 log.go:172] (0xc000a67810) (0xc0006d15e0) Create stream\nI0429 13:29:47.020913     585 log.go:172] (0xc000a67810) (0xc0006d15e0) Stream added, broadcasting: 1\nI0429 13:29:47.025378     585 log.go:172] (0xc000a67810) Reply frame received for 1\nI0429 13:29:47.025426     585 log.go:172] (0xc000a67810) (0xc00053c320) Create stream\nI0429 13:29:47.025439     585 log.go:172] (0xc000a67810) (0xc00053c320) Stream added, broadcasting: 3\nI0429 13:29:47.026363     585 log.go:172] (0xc000a67810) Reply frame received for 3\nI0429 13:29:47.026402     585 log.go:172] (0xc000a67810) (0xc00053d2c0) Create stream\nI0429 13:29:47.026412     585 log.go:172] (0xc000a67810) (0xc00053d2c0) Stream added, broadcasting: 5\nI0429 13:29:47.027262     585 log.go:172] (0xc000a67810) Reply frame received for 5\nI0429 13:29:47.107990     585 log.go:172] (0xc000a67810) Data frame received for 5\nI0429 13:29:47.108030     585 log.go:172] (0xc00053d2c0) (5) Data frame handling\nI0429 13:29:47.108053     585 log.go:172] (0xc00053d2c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0429 13:29:47.108308     585 log.go:172] (0xc000a67810) Data frame received for 5\nI0429 13:29:47.108336     585 log.go:172] (0xc00053d2c0) (5) Data frame handling\nI0429 13:29:47.108348     585 log.go:172] (0xc00053d2c0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0429 13:29:47.108706     585 log.go:172] (0xc000a67810) Data frame received for 5\nI0429 13:29:47.108725     585 log.go:172] (0xc00053d2c0) (5) Data frame handling\nI0429 13:29:47.108777     585 log.go:172] (0xc000a67810) Data frame received for 3\nI0429 13:29:47.108843     585 log.go:172] (0xc00053c320) (3) Data frame handling\nI0429 13:29:47.110822     585 log.go:172] (0xc000a67810) Data frame received for 1\nI0429 13:29:47.110848     585 log.go:172] (0xc0006d15e0) (1) Data frame handling\nI0429 13:29:47.110862     
585 log.go:172] (0xc0006d15e0) (1) Data frame sent\nI0429 13:29:47.110880     585 log.go:172] (0xc000a67810) (0xc0006d15e0) Stream removed, broadcasting: 1\nI0429 13:29:47.110924     585 log.go:172] (0xc000a67810) Go away received\nI0429 13:29:47.111345     585 log.go:172] (0xc000a67810) (0xc0006d15e0) Stream removed, broadcasting: 1\nI0429 13:29:47.111366     585 log.go:172] (0xc000a67810) (0xc00053c320) Stream removed, broadcasting: 3\nI0429 13:29:47.111385     585 log.go:172] (0xc000a67810) (0xc00053d2c0) Stream removed, broadcasting: 5\n"
Apr 29 13:29:47.116: INFO: stdout: ""
Apr 29 13:29:47.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2229 execpod-affinity7wkjr -- /bin/sh -x -c nc -zv -t -w 2 10.106.109.158 80'
Apr 29 13:29:47.304: INFO: stderr: "I0429 13:29:47.240903     605 log.go:172] (0xc00003ac60) (0xc000136f00) Create stream\nI0429 13:29:47.240963     605 log.go:172] (0xc00003ac60) (0xc000136f00) Stream added, broadcasting: 1\nI0429 13:29:47.243676     605 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0429 13:29:47.243700     605 log.go:172] (0xc00003ac60) (0xc00012e140) Create stream\nI0429 13:29:47.243707     605 log.go:172] (0xc00003ac60) (0xc00012e140) Stream added, broadcasting: 3\nI0429 13:29:47.244456     605 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0429 13:29:47.244487     605 log.go:172] (0xc00003ac60) (0xc0001379a0) Create stream\nI0429 13:29:47.244496     605 log.go:172] (0xc00003ac60) (0xc0001379a0) Stream added, broadcasting: 5\nI0429 13:29:47.245307     605 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0429 13:29:47.298103     605 log.go:172] (0xc00003ac60) Data frame received for 3\nI0429 13:29:47.298134     605 log.go:172] (0xc00012e140) (3) Data frame handling\nI0429 13:29:47.298154     605 log.go:172] (0xc00003ac60) Data frame received for 5\nI0429 13:29:47.298173     605 log.go:172] (0xc0001379a0) (5) Data frame handling\nI0429 13:29:47.298194     605 log.go:172] (0xc0001379a0) (5) Data frame sent\nI0429 13:29:47.298212     605 log.go:172] (0xc00003ac60) Data frame received for 5\nI0429 13:29:47.298227     605 log.go:172] (0xc0001379a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.109.158 80\nConnection to 10.106.109.158 80 port [tcp/http] succeeded!\nI0429 13:29:47.299390     605 log.go:172] (0xc00003ac60) Data frame received for 1\nI0429 13:29:47.299415     605 log.go:172] (0xc000136f00) (1) Data frame handling\nI0429 13:29:47.299431     605 log.go:172] (0xc000136f00) (1) Data frame sent\nI0429 13:29:47.299448     605 log.go:172] (0xc00003ac60) (0xc000136f00) Stream removed, broadcasting: 1\nI0429 13:29:47.299474     605 log.go:172] (0xc00003ac60) Go away received\nI0429 13:29:47.299965     605 log.go:172] 
(0xc00003ac60) (0xc000136f00) Stream removed, broadcasting: 1\nI0429 13:29:47.299987     605 log.go:172] (0xc00003ac60) (0xc00012e140) Stream removed, broadcasting: 3\nI0429 13:29:47.299998     605 log.go:172] (0xc00003ac60) (0xc0001379a0) Stream removed, broadcasting: 5\n"
Apr 29 13:29:47.304: INFO: stdout: ""
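The two `nc` probes above confirm the service answers both by DNS name and by its ClusterIP (10.106.109.158). The test then fires a burst of `curl` requests and inspects which backend answered each one: with sessionAffinity set, every response must come from the same pod; after switching affinity off, responses should spread across the 3 replicas. A sketch of that check (hypothetical helper, not the e2e framework's implementation):

```python
def has_session_affinity(responses: list) -> bool:
    """True when every response in a request burst came from the same
    backend pod, i.e. the service honored sessionAffinity: ClientIP."""
    return len(set(responses)) == 1

# With affinity on, all 16 curls hit one pod:
print(has_session_affinity(["affinity-pod-a"] * 16))            # -> True
# With affinity off, traffic spreads across the replicas:
print(has_session_affinity(["pod-a", "pod-b", "pod-c"] * 4))    # -> False
```

This is why the test names itself "switch session affinity": it asserts the single-backend property, flips the service's sessionAffinity field, and asserts the property no longer holds.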
Apr 29 13:29:47.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2229 execpod-affinity7wkjr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.109.158:80/ ; done'
Apr 29 13:29:47.621: INFO: stderr: "I0429 13:29:47.468395     626 log.go:172] (0xc000ab78c0) (0xc000bc03c0) Create stream\nI0429 13:29:47.468449     626 log.go:172] (0xc000ab78c0) (0xc000bc03c0) Stream added, broadcasting: 1\nI0429 13:29:47.475928     626 log.go:172] (0xc000ab78c0) Reply frame received for 1\nI0429 13:29:47.475976     626 log.go:172] (0xc000ab78c0) (0xc00072e6e0) Create stream\nI0429 13:29:47.475989     626 log.go:172] (0xc000ab78c0) (0xc00072e6e0) Stream added, broadcasting: 3\nI0429 13:29:47.476912     626 log.go:172] (0xc000ab78c0) Reply frame received for 3\nI0429 13:29:47.476942     626 log.go:172] (0xc000ab78c0) (0xc0005e83c0) Create stream\nI0429 13:29:47.476953     626 log.go:172] (0xc000ab78c0) (0xc0005e83c0) Stream added, broadcasting: 5\nI0429 13:29:47.477924     626 log.go:172] (0xc000ab78c0) Reply frame received for 5\nI0429 13:29:47.528848     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.528885     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.528900     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.528928     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.528939     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.528950     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.533422     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.533444     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.533471     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.533745     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.533772     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.533788     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.533810     626 log.go:172] (0xc000ab78c0) Data frame received for 
5\nI0429 13:29:47.533825     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.533840     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.538070     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.538089     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.538104     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.538804     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.538819     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.538827     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/I0429 13:29:47.538838     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.538857     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.538870     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.538883     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.538890     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.538898     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n\nI0429 13:29:47.546615     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.546641     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.546689     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.547365     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.547395     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.547453     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.547483     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.547524     626 log.go:172] (0xc00072e6e0) (3) Data frame 
handling\nI0429 13:29:47.547568     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.552811     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.552832     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.552851     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.553448     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.553480     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.553494     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.553507     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.553520     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.553532     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.557650     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.557725     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.557749     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.557998     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.558031     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.558057     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.558080     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.558091     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.558103     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.562234     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.562274     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.562305     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.562708     626 log.go:172] (0xc000ab78c0) Data frame 
received for 3\nI0429 13:29:47.562734     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.562748     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.562764     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.562785     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.562805     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.566900     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.566915     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.566934     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.567477     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.567511     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.567525     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.567549     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.567565     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.567576     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.571601     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.571626     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.571646     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.572455     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.572500     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.572516     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.572538     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.572549     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.572567     626 log.go:172] (0xc0005e83c0) 
(5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.577401     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.577435     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.577468     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.578184     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.578208     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.578221     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.578245     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.578258     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.578278     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.582559     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.582582     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.582595     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.582825     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.582848     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.582860     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.583015     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.583044     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.583061     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.588362     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.588383     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.588402     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.588835     626 log.go:172] (0xc000ab78c0) Data 
frame received for 5\nI0429 13:29:47.588855     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.588873     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\nI0429 13:29:47.588889     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.588898     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.588921     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\nI0429 13:29:47.589593     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.589615     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.589631     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.593003     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.593018     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.593029     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.593715     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.593727     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.593733     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.593799     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.593812     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.593824     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.598345     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.598355     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.598362     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.598783     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.598805     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.598816     626 log.go:172] 
(0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.598839     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.598882     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.598905     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.602737     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.602761     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.602782     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.603479     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.603515     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.603529     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.603547     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.603581     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.603599     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.607760     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.607777     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.607792     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.608211     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.608241     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.608273     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0429 13:29:47.608422     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.608446     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.608458     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.608476     626 log.go:172] (0xc000ab78c0) Data frame received for 
5\nI0429 13:29:47.608486     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.608498     626 log.go:172] (0xc0005e83c0) (5) Data frame sent\n 2 http://10.106.109.158:80/\nI0429 13:29:47.613434     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.613543     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.613588     626 log.go:172] (0xc00072e6e0) (3) Data frame sent\nI0429 13:29:47.613820     626 log.go:172] (0xc000ab78c0) Data frame received for 5\nI0429 13:29:47.613856     626 log.go:172] (0xc0005e83c0) (5) Data frame handling\nI0429 13:29:47.614043     626 log.go:172] (0xc000ab78c0) Data frame received for 3\nI0429 13:29:47.614073     626 log.go:172] (0xc00072e6e0) (3) Data frame handling\nI0429 13:29:47.615677     626 log.go:172] (0xc000ab78c0) Data frame received for 1\nI0429 13:29:47.615725     626 log.go:172] (0xc000bc03c0) (1) Data frame handling\nI0429 13:29:47.615751     626 log.go:172] (0xc000bc03c0) (1) Data frame sent\nI0429 13:29:47.615788     626 log.go:172] (0xc000ab78c0) (0xc000bc03c0) Stream removed, broadcasting: 1\nI0429 13:29:47.615901     626 log.go:172] (0xc000ab78c0) Go away received\nI0429 13:29:47.616283     626 log.go:172] (0xc000ab78c0) (0xc000bc03c0) Stream removed, broadcasting: 1\nI0429 13:29:47.616303     626 log.go:172] (0xc000ab78c0) (0xc00072e6e0) Stream removed, broadcasting: 3\nI0429 13:29:47.616314     626 log.go:172] (0xc000ab78c0) (0xc0005e83c0) Stream removed, broadcasting: 5\n"
Apr 29 13:29:47.622: INFO: stdout: "\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-c5r46\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-lthqw\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-c5r46\naffinity-clusterip-transition-c5r46\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-c5r46"
Apr 29 13:29:47.622: INFO: Received response from host: 
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-c5r46
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-lthqw
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-c5r46
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-c5r46
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.622: INFO: Received response from host: affinity-clusterip-transition-c5r46
Apr 29 13:29:47.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2229 execpod-affinity7wkjr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.109.158:80/ ; done'
Apr 29 13:29:47.952: INFO: stderr: "I0429 13:29:47.769330     645 log.go:172] (0xc000aa3550) (0xc000620d20) Create stream\nI0429 13:29:47.769377     645 log.go:172] (0xc000aa3550) (0xc000620d20) Stream added, broadcasting: 1\nI0429 13:29:47.771549     645 log.go:172] (0xc000aa3550) Reply frame received for 1\nI0429 13:29:47.771580     645 log.go:172] (0xc000aa3550) (0xc000249e00) Create stream\nI0429 13:29:47.771595     645 log.go:172] (0xc000aa3550) (0xc000249e00) Stream added, broadcasting: 3\nI0429 13:29:47.772512     645 log.go:172] (0xc000aa3550) Reply frame received for 3\nI0429 13:29:47.772553     645 log.go:172] (0xc000aa3550) (0xc00067aaa0) Create stream\nI0429 13:29:47.772566     645 log.go:172] (0xc000aa3550) (0xc00067aaa0) Stream added, broadcasting: 5\nI0429 13:29:47.773518     645 log.go:172] (0xc000aa3550) Reply frame received for 5\nI0429 13:29:47.847210     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.847320     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.847376     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.847446     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.847459     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.847472     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.853805     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.853825     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.853839     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.854586     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.854622     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.854641     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.854659     645 log.go:172] (0xc000aa3550) Data frame received for 
5\nI0429 13:29:47.854669     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.854677     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.858341     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.858355     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.858370     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.858822     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.858846     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.858863     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.858883     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.858916     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.858930     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.864629     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.864663     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.864683     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.865487     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.865504     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.865516     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.865620     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.865638     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.865653     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.871040     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.871075     645 log.go:172] (0xc000249e00) (3) Data frame 
handling\nI0429 13:29:47.871102     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.871274     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.871293     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.871311     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.871323     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.871331     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.871347     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.878891     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.878909     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.878923     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.879735     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.879759     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.879793     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.879812     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.879830     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.879840     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.884691     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.884714     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.884730     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.885542     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.885581     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.885594     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.106.109.158:80/\nI0429 13:29:47.885613     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.885632     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.885646     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.890759     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.890798     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.890834     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.891176     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.891199     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.891221     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.891399     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.891411     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.891418     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.895658     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.895676     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.895693     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.896464     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.896491     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.896511     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.896546     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.896562     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.896583     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.901724     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.901742     645 log.go:172] 
(0xc000249e00) (3) Data frame handling\nI0429 13:29:47.901756     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.902298     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.902324     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.902335     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.902358     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.902374     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.902414     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\nI0429 13:29:47.902434     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.902444     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.902467     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\nI0429 13:29:47.907036     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.907056     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.907242     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.907769     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.907782     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.907789     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.907816     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.907845     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.907867     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.911878     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.911912     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.911943     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.912211     645 
log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.912238     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.912254     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.912270     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.912279     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.912286     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.918587     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.918610     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.918627     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.919265     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.919279     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.919288     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.919306     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.919320     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.919340     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.925940     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.925977     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.926023     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.926782     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.926812     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.926830     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.926852     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.926862     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 
13:29:47.926871     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.930021     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.930044     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.930075     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.930606     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.930633     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.930647     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.930665     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.930675     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.930686     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.936849     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.936885     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.936926     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.937621     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.937641     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.937651     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 13:29:47.937669     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.937689     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.937708     645 log.go:172] (0xc00067aaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.109.158:80/\nI0429 13:29:47.943897     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.943923     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.943951     645 log.go:172] (0xc000249e00) (3) Data frame sent\nI0429 
13:29:47.944636     645 log.go:172] (0xc000aa3550) Data frame received for 3\nI0429 13:29:47.944656     645 log.go:172] (0xc000249e00) (3) Data frame handling\nI0429 13:29:47.944856     645 log.go:172] (0xc000aa3550) Data frame received for 5\nI0429 13:29:47.944880     645 log.go:172] (0xc00067aaa0) (5) Data frame handling\nI0429 13:29:47.946697     645 log.go:172] (0xc000aa3550) Data frame received for 1\nI0429 13:29:47.946733     645 log.go:172] (0xc000620d20) (1) Data frame handling\nI0429 13:29:47.946752     645 log.go:172] (0xc000620d20) (1) Data frame sent\nI0429 13:29:47.946779     645 log.go:172] (0xc000aa3550) (0xc000620d20) Stream removed, broadcasting: 1\nI0429 13:29:47.946811     645 log.go:172] (0xc000aa3550) Go away received\nI0429 13:29:47.947243     645 log.go:172] (0xc000aa3550) (0xc000620d20) Stream removed, broadcasting: 1\nI0429 13:29:47.947272     645 log.go:172] (0xc000aa3550) (0xc000249e00) Stream removed, broadcasting: 3\nI0429 13:29:47.947291     645 log.go:172] (0xc000aa3550) (0xc00067aaa0) Stream removed, broadcasting: 5\n"
Apr 29 13:29:47.952: INFO: stdout: "\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p\naffinity-clusterip-transition-2dm8p"
Apr 29 13:29:47.952: INFO: Received response from host: 
Apr 29 13:29:47.952: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.952: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.952: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.952: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.952: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Received response from host: affinity-clusterip-transition-2dm8p
Apr 29 13:29:47.953: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2229, will wait for the garbage collector to delete the pods
Apr 29 13:29:48.058: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.714392ms
Apr 29 13:29:48.458: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.256904ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:03.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2229" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:28.238 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":43,"skipped":600,"failed":0}
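Editor's note: the affinity test above boils down to probing the service's ClusterIP sixteen times and checking whether the returned pod hostnames converge to a single backend once sessionAffinity is switched to ClientIP. A minimal local sketch of just that counting logic (the in-cluster curl calls replaced by hostname lists taken from the log; `check_affinity` is a hypothetical helper, not part of the e2e framework):

```shell
# Stand-in for the e2e affinity check: given the hostnames returned by the
# probes, decide whether session affinity held (all probes hit one backend)
# or traffic was distributed across several backends.
check_affinity() {
  # $@ = hostnames returned by the probes
  distinct=$(printf '%s\n' "$@" | sort -u | wc -l | tr -d ' ')
  if [ "$distinct" -eq 1 ]; then
    echo "affinity held: all probes hit $1"
  else
    echo "no affinity: $distinct distinct backends"
  fi
}

# Responses as seen in the log before and after the affinity switch:
check_affinity affinity-clusterip-transition-lthqw \
               affinity-clusterip-transition-c5r46 \
               affinity-clusterip-transition-2dm8p
check_affinity affinity-clusterip-transition-2dm8p \
               affinity-clusterip-transition-2dm8p \
               affinity-clusterip-transition-2dm8p
```

The first call mirrors the pre-switch log output (three distinct backends), the second the post-switch output (every probe pinned to affinity-clusterip-transition-2dm8p).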
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:03.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-22rj
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 13:30:03.916: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-22rj" in namespace "subpath-8236" to be "Succeeded or Failed"
Apr 29 13:30:03.957: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Pending", Reason="", readiness=false. Elapsed: 40.692763ms
Apr 29 13:30:05.961: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044865325s
Apr 29 13:30:07.965: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 4.049147272s
Apr 29 13:30:09.969: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 6.052800901s
Apr 29 13:30:11.973: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 8.057084284s
Apr 29 13:30:13.982: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 10.065576698s
Apr 29 13:30:15.999: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 12.083111583s
Apr 29 13:30:18.018: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 14.101367878s
Apr 29 13:30:20.021: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 16.105075914s
Apr 29 13:30:22.025: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 18.109010811s
Apr 29 13:30:24.028: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 20.111996879s
Apr 29 13:30:26.072: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Running", Reason="", readiness=true. Elapsed: 22.155221845s
Apr 29 13:30:28.077: INFO: Pod "pod-subpath-test-projected-22rj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.160623322s
STEP: Saw pod success
Apr 29 13:30:28.077: INFO: Pod "pod-subpath-test-projected-22rj" satisfied condition "Succeeded or Failed"
Apr 29 13:30:28.080: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-22rj container test-container-subpath-projected-22rj: 
STEP: delete the pod
Apr 29 13:30:28.277: INFO: Waiting for pod pod-subpath-test-projected-22rj to disappear
Apr 29 13:30:28.283: INFO: Pod pod-subpath-test-projected-22rj no longer exists
STEP: Deleting pod pod-subpath-test-projected-22rj
Apr 29 13:30:28.283: INFO: Deleting pod "pod-subpath-test-projected-22rj" in namespace "subpath-8236"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:28.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8236" for this suite.

• [SLOW TEST:24.454 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":290,"completed":44,"skipped":623,"failed":0}
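Editor's note: the repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Running"` lines above come from a simple poll loop in the e2e framework. A hedged local sketch of that retry pattern (the kubectl phase lookup replaced by a stub; `get_phase` and `wait_for_phase` are illustrative names, not framework functions):

```shell
n=0
get_phase() {  # hypothetical stub for: kubectl get pod ... -o jsonpath='{.status.phase}'
  n=$((n + 1))
  case "$n" in
    1) PHASE=Pending ;;
    2) PHASE=Running ;;
    *) PHASE=Succeeded ;;
  esac
}

# Re-check the pod phase on each iteration until it reaches a terminal
# phase (Succeeded or Failed) or the maximum number of checks is exhausted,
# mirroring the framework's wait-for-pod behaviour seen in the log.
wait_for_phase() {
  max=$1; i=0
  while [ "$i" -lt "$max" ]; do
    get_phase
    echo "check $i: Phase=\"$PHASE\""
    case "$PHASE" in Succeeded|Failed) return 0 ;; esac
    i=$((i + 1))
  done
  return 1
}

wait_for_phase 10 && echo "pod satisfied condition \"Succeeded or Failed\""
```

The stub walks through Pending, Running, Succeeded, matching the phase progression logged for pod-subpath-test-projected-22rj; the real framework sleeps between checks (roughly every 2s in the log) rather than looping immediately.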
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:28.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-bfc725f5-8d69-4d7e-b5dd-f553e940d207
STEP: Creating secret with name s-test-opt-upd-7cd3d412-65ea-4443-bd08-49574752bc89
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bfc725f5-8d69-4d7e-b5dd-f553e940d207
STEP: Updating secret s-test-opt-upd-7cd3d412-65ea-4443-bd08-49574752bc89
STEP: Creating secret with name s-test-opt-create-4c49b431-bb57-4e7f-a360-2a38e02b2708
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:36.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5417" for this suite.

• [SLOW TEST:8.249 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":45,"skipped":624,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:36.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:30:37.437: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:30:39.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 13:30:41.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763837, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:30:44.604: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:45.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7569" for this suite.
STEP: Destroying namespace "webhook-7569-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.929 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":290,"completed":46,"skipped":629,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:45.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1011" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":290,"completed":47,"skipped":648,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:49.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0429 13:30:50.797719       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 13:30:50.797: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:50.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1389" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":290,"completed":48,"skipped":674,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:50.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 29 13:30:50.947: INFO: Waiting up to 5m0s for pod "downward-api-95075188-6c56-426b-b502-833b04613c11" in namespace "downward-api-2512" to be "Succeeded or Failed"
Apr 29 13:30:51.018: INFO: Pod "downward-api-95075188-6c56-426b-b502-833b04613c11": Phase="Pending", Reason="", readiness=false. Elapsed: 70.845132ms
Apr 29 13:30:53.048: INFO: Pod "downward-api-95075188-6c56-426b-b502-833b04613c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100720001s
Apr 29 13:30:55.142: INFO: Pod "downward-api-95075188-6c56-426b-b502-833b04613c11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195128282s
Apr 29 13:30:57.146: INFO: Pod "downward-api-95075188-6c56-426b-b502-833b04613c11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.198804667s
STEP: Saw pod success
Apr 29 13:30:57.146: INFO: Pod "downward-api-95075188-6c56-426b-b502-833b04613c11" satisfied condition "Succeeded or Failed"
Apr 29 13:30:57.149: INFO: Trying to get logs from node kali-worker pod downward-api-95075188-6c56-426b-b502-833b04613c11 container dapi-container: 
STEP: delete the pod
Apr 29 13:30:57.202: INFO: Waiting for pod downward-api-95075188-6c56-426b-b502-833b04613c11 to disappear
Apr 29 13:30:57.206: INFO: Pod downward-api-95075188-6c56-426b-b502-833b04613c11 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:30:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2512" for this suite.

• [SLOW TEST:6.436 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":290,"completed":49,"skipped":681,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:30:57.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:31:13.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2154" for this suite.

• [SLOW TEST:16.320 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":290,"completed":50,"skipped":690,"failed":0}
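[Editor's note] The scope checks in the test above hinge on how ResourceQuota classifies pods: the Terminating scope matches pods with `spec.activeDeadlineSeconds` set, and NotTerminating matches pods without it. A simplified sketch of that classification (stand-in types, not the real quota evaluator):

```go
package main

import "fmt"

// podSpec is a simplified stand-in for the relevant part of a Pod spec.
type podSpec struct {
	ActiveDeadlineSeconds *int64 // nil means the pod is long-running
}

// podMatchesScope sketches ResourceQuota scope classification:
// Terminating matches pods that have an active deadline set,
// NotTerminating matches pods that do not.
func podMatchesScope(p podSpec, scope string) bool {
	switch scope {
	case "Terminating":
		return p.ActiveDeadlineSeconds != nil
	case "NotTerminating":
		return p.ActiveDeadlineSeconds == nil
	}
	return false
}

func main() {
	deadline := int64(3600)
	longRunning := podSpec{}                                 // "long running pod" from the test
	terminating := podSpec{ActiveDeadlineSeconds: &deadline} // "terminating pod" from the test
	fmt.Println(podMatchesScope(longRunning, "NotTerminating"))
	fmt.Println(podMatchesScope(terminating, "Terminating"))
	fmt.Println(podMatchesScope(longRunning, "Terminating"))
}
```

This is why the log shows each quota "capturing" one pod's usage and "ignoring" the other's: every pod matches exactly one of the two scopes.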
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:31:13.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1239
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1239
STEP: Creating statefulset with conflicting port in namespace statefulset-1239
STEP: Waiting until pod test-pod will start running in namespace statefulset-1239
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1239
Apr 29 13:31:19.864: INFO: Observed stateful pod in namespace: statefulset-1239, name: ss-0, uid: c203c460-57f9-4995-8d8f-f1316953eba3, status phase: Pending. Waiting for statefulset controller to delete.
Apr 29 13:31:19.999: INFO: Observed stateful pod in namespace: statefulset-1239, name: ss-0, uid: c203c460-57f9-4995-8d8f-f1316953eba3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 29 13:31:20.025: INFO: Observed stateful pod in namespace: statefulset-1239, name: ss-0, uid: c203c460-57f9-4995-8d8f-f1316953eba3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 29 13:31:20.062: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1239
STEP: Removing pod with conflicting port in namespace statefulset-1239
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1239 and reaches running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Apr 29 13:31:24.170: INFO: Deleting all statefulset in ns statefulset-1239
Apr 29 13:31:24.174: INFO: Scaling statefulset ss to 0
Apr 29 13:31:34.209: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:31:34.212: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:31:34.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1239" for this suite.

• [SLOW TEST:20.667 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":290,"completed":51,"skipped":704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:31:34.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 29 13:31:42.361: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2233 PodName:pod-sharedvolume-b5813081-64b9-4c36-aedb-ca7e7165a83f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:31:42.361: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:31:42.406325       7 log.go:172] (0xc002e16d10) (0xc001baedc0) Create stream
I0429 13:31:42.406359       7 log.go:172] (0xc002e16d10) (0xc001baedc0) Stream added, broadcasting: 1
I0429 13:31:42.408618       7 log.go:172] (0xc002e16d10) Reply frame received for 1
I0429 13:31:42.408670       7 log.go:172] (0xc002e16d10) (0xc001b246e0) Create stream
I0429 13:31:42.408691       7 log.go:172] (0xc002e16d10) (0xc001b246e0) Stream added, broadcasting: 3
I0429 13:31:42.409885       7 log.go:172] (0xc002e16d10) Reply frame received for 3
I0429 13:31:42.409921       7 log.go:172] (0xc002e16d10) (0xc0011aa000) Create stream
I0429 13:31:42.409935       7 log.go:172] (0xc002e16d10) (0xc0011aa000) Stream added, broadcasting: 5
I0429 13:31:42.411049       7 log.go:172] (0xc002e16d10) Reply frame received for 5
I0429 13:31:42.505463       7 log.go:172] (0xc002e16d10) Data frame received for 5
I0429 13:31:42.505506       7 log.go:172] (0xc0011aa000) (5) Data frame handling
I0429 13:31:42.505532       7 log.go:172] (0xc002e16d10) Data frame received for 3
I0429 13:31:42.505545       7 log.go:172] (0xc001b246e0) (3) Data frame handling
I0429 13:31:42.505562       7 log.go:172] (0xc001b246e0) (3) Data frame sent
I0429 13:31:42.505575       7 log.go:172] (0xc002e16d10) Data frame received for 3
I0429 13:31:42.505587       7 log.go:172] (0xc001b246e0) (3) Data frame handling
I0429 13:31:42.507110       7 log.go:172] (0xc002e16d10) Data frame received for 1
I0429 13:31:42.507127       7 log.go:172] (0xc001baedc0) (1) Data frame handling
I0429 13:31:42.507139       7 log.go:172] (0xc001baedc0) (1) Data frame sent
I0429 13:31:42.507150       7 log.go:172] (0xc002e16d10) (0xc001baedc0) Stream removed, broadcasting: 1
I0429 13:31:42.507198       7 log.go:172] (0xc002e16d10) Go away received
I0429 13:31:42.507255       7 log.go:172] (0xc002e16d10) (0xc001baedc0) Stream removed, broadcasting: 1
I0429 13:31:42.507268       7 log.go:172] (0xc002e16d10) (0xc001b246e0) Stream removed, broadcasting: 3
I0429 13:31:42.507278       7 log.go:172] (0xc002e16d10) (0xc0011aa000) Stream removed, broadcasting: 5
Apr 29 13:31:42.507: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:31:42.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2233" for this suite.

• [SLOW TEST:8.285 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":290,"completed":52,"skipped":741,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:31:42.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 29 13:31:47.350: INFO: Successfully updated pod "annotationupdate40f2f92e-e304-4cd8-b525-f168bf608674"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:31:49.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7220" for this suite.

• [SLOW TEST:6.868 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":290,"completed":53,"skipped":742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:31:49.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 29 13:31:49.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4096 /api/v1/namespaces/watch-4096/configmaps/e2e-watch-test-resource-version 3a1708a7-180f-45ac-bf0f-43da07578d11 63613 0 2020-04-29 13:31:49 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-04-29 13:31:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 13:31:49.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4096 /api/v1/namespaces/watch-4096/configmaps/e2e-watch-test-resource-version 3a1708a7-180f-45ac-bf0f-43da07578d11 63614 0 2020-04-29 13:31:49 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-04-29 13:31:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:31:49.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4096" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":290,"completed":54,"skipped":767,"failed":0}
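[Editor's note] The watch above is started from the resourceVersion returned by the first update, so the client observes only the later MODIFIED (63613) and DELETED (63614) events. A toy model of that delivery rule; real clients use client-go's watch interface against the API server:

```go
package main

import "fmt"

// event is a simplified watch event carrying the object's resourceVersion.
type event struct {
	Type            string
	ResourceVersion int
}

// watchFrom returns the events a watch started at startRV would deliver:
// only events whose resourceVersion is strictly greater than startRV,
// in order. A toy model of the behavior the test exercises.
func watchFrom(history []event, startRV int) []event {
	var out []event
	for _, e := range history {
		if e.ResourceVersion > startRV {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	// ConfigMap modified twice then deleted; the watch starts at the
	// resourceVersion of the first update, as in the test above
	// (resourceVersions are illustrative, taken from the log).
	history := []event{
		{"MODIFIED", 63612}, // first update
		{"MODIFIED", 63613}, // second update
		{"DELETED", 63614},
	}
	for _, e := range watchFrom(history, 63612) {
		fmt.Println(e.Type, e.ResourceVersion)
	}
}
```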
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:31:49.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 29 13:31:49.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5667'
Apr 29 13:31:49.664: INFO: stderr: ""
Apr 29 13:31:49.664: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528
Apr 29 13:31:49.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5667'
Apr 29 13:32:03.790: INFO: stderr: ""
Apr 29 13:32:03.791: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:32:03.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5667" for this suite.

• [SLOW TEST:16.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":290,"completed":55,"skipped":791,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:32:05.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Apr 29 13:32:08.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:32:23.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4123" for this suite.

• [SLOW TEST:17.903 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":290,"completed":56,"skipped":811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:32:23.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:32:24.944: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f" in namespace "security-context-test-614" to be "Succeeded or Failed"
Apr 29 13:32:24.970: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.897496ms
Apr 29 13:32:27.055: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110328836s
Apr 29 13:32:29.057: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113188033s
Apr 29 13:32:31.229: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284675037s
Apr 29 13:32:33.312: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.367733361s
Apr 29 13:32:33.312: INFO: Pod "alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:32:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-614" for this suite.

• [SLOW TEST:9.588 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":57,"skipped":835,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:32:33.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 13:32:33.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 13:32:33.705: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 13:32:33.708: INFO: 
Logging pods the apiserver thinks are on node kali-worker before test
Apr 29 13:32:33.713: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 13:32:33.714: INFO: 	Container kindnet-cni ready: true, restart count 1
Apr 29 13:32:33.714: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 13:32:33.714: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 13:32:33.714: INFO: 
Logging pods the apiserver thinks are on node kali-worker2 before test
Apr 29 13:32:33.718: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 13:32:33.718: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 13:32:33.718: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 13:32:33.718: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 13:32:33.718: INFO: alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f from security-context-test-614 started at 2020-04-29 13:32:24 +0000 UTC (1 container status recorded)
Apr 29 13:32:33.718: INFO: 	Container alpine-nnp-false-345ca7b8-23e9-4c4a-8e0a-499c1cf9ff6f ready: false, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1d9c0be7-4943-4763-8df7-52aaeb602f2d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1d9c0be7-4943-4763-8df7-52aaeb602f2d off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1d9c0be7-4943-4763-8df7-52aaeb602f2d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:32:44.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7128" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:10.606 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":290,"completed":58,"skipped":847,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:32:44.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:32:44.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:32:46.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763964, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763964, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763964, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723763964, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:32:49.875: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:32:49.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8222-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:32:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1992" for this suite.
STEP: Destroying namespace "webhook-1992-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.053 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":290,"completed":59,"skipped":850,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:32:51.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4377
Apr 29 13:32:55.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 29 13:32:55.403: INFO: stderr: "I0429 13:32:55.312590     709 log.go:172] (0xc000ace6e0) (0xc0006d4f00) Create stream\nI0429 13:32:55.312663     709 log.go:172] (0xc000ace6e0) (0xc0006d4f00) Stream added, broadcasting: 1\nI0429 13:32:55.316824     709 log.go:172] (0xc000ace6e0) Reply frame received for 1\nI0429 13:32:55.316869     709 log.go:172] (0xc000ace6e0) (0xc0006b4d20) Create stream\nI0429 13:32:55.316882     709 log.go:172] (0xc000ace6e0) (0xc0006b4d20) Stream added, broadcasting: 3\nI0429 13:32:55.318034     709 log.go:172] (0xc000ace6e0) Reply frame received for 3\nI0429 13:32:55.318092     709 log.go:172] (0xc000ace6e0) (0xc00061c280) Create stream\nI0429 13:32:55.318115     709 log.go:172] (0xc000ace6e0) (0xc00061c280) Stream added, broadcasting: 5\nI0429 13:32:55.319032     709 log.go:172] (0xc000ace6e0) Reply frame received for 5\nI0429 13:32:55.390575     709 log.go:172] (0xc000ace6e0) Data frame received for 5\nI0429 13:32:55.390604     709 log.go:172] (0xc00061c280) (5) Data frame handling\nI0429 13:32:55.390623     709 log.go:172] (0xc00061c280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0429 13:32:55.394977     709 log.go:172] (0xc000ace6e0) Data frame received for 3\nI0429 13:32:55.395006     709 log.go:172] (0xc0006b4d20) (3) Data frame handling\nI0429 13:32:55.395031     709 log.go:172] (0xc0006b4d20) (3) Data frame sent\nI0429 13:32:55.395561     709 log.go:172] (0xc000ace6e0) Data frame received for 3\nI0429 13:32:55.395589     709 log.go:172] (0xc0006b4d20) (3) Data frame handling\nI0429 13:32:55.395617     709 log.go:172] (0xc000ace6e0) Data frame received for 5\nI0429 13:32:55.395645     709 log.go:172] (0xc00061c280) (5) Data frame handling\nI0429 13:32:55.397976     709 log.go:172] (0xc000ace6e0) Data frame received for 1\nI0429 13:32:55.398012     709 log.go:172] (0xc0006d4f00) (1) Data frame handling\nI0429 13:32:55.398060     709 log.go:172] (0xc0006d4f00) (1) Data frame sent\nI0429 
13:32:55.398089     709 log.go:172] (0xc000ace6e0) (0xc0006d4f00) Stream removed, broadcasting: 1\nI0429 13:32:55.398118     709 log.go:172] (0xc000ace6e0) Go away received\nI0429 13:32:55.398565     709 log.go:172] (0xc000ace6e0) (0xc0006d4f00) Stream removed, broadcasting: 1\nI0429 13:32:55.398596     709 log.go:172] (0xc000ace6e0) (0xc0006b4d20) Stream removed, broadcasting: 3\nI0429 13:32:55.398616     709 log.go:172] (0xc000ace6e0) (0xc00061c280) Stream removed, broadcasting: 5\n"
Apr 29 13:32:55.403: INFO: stdout: "iptables"
Apr 29 13:32:55.403: INFO: proxyMode: iptables
Apr 29 13:32:55.408: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:32:55.430: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:32:57.431: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:32:57.732: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:32:59.431: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:32:59.435: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:33:01.431: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:33:01.435: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:33:03.431: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:33:03.434: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:33:05.431: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:33:05.433: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-4377
STEP: creating replication controller affinity-clusterip-timeout in namespace services-4377
I0429 13:33:05.480585       7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4377, replica count: 3
I0429 13:33:08.531050       7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:33:11.531286       7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:33:11.538: INFO: Creating new exec pod
Apr 29 13:33:16.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 execpod-affinityp9lmr -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Apr 29 13:33:16.789: INFO: stderr: "I0429 13:33:16.702234     732 log.go:172] (0xc000450d10) (0xc0006dd4a0) Create stream\nI0429 13:33:16.702287     732 log.go:172] (0xc000450d10) (0xc0006dd4a0) Stream added, broadcasting: 1\nI0429 13:33:16.704173     732 log.go:172] (0xc000450d10) Reply frame received for 1\nI0429 13:33:16.704218     732 log.go:172] (0xc000450d10) (0xc0006f8e60) Create stream\nI0429 13:33:16.704232     732 log.go:172] (0xc000450d10) (0xc0006f8e60) Stream added, broadcasting: 3\nI0429 13:33:16.705591     732 log.go:172] (0xc000450d10) Reply frame received for 3\nI0429 13:33:16.705759     732 log.go:172] (0xc000450d10) (0xc000a5c140) Create stream\nI0429 13:33:16.705886     732 log.go:172] (0xc000450d10) (0xc000a5c140) Stream added, broadcasting: 5\nI0429 13:33:16.707437     732 log.go:172] (0xc000450d10) Reply frame received for 5\nI0429 13:33:16.782278     732 log.go:172] (0xc000450d10) Data frame received for 5\nI0429 13:33:16.782303     732 log.go:172] (0xc000a5c140) (5) Data frame handling\nI0429 13:33:16.782325     732 log.go:172] (0xc000a5c140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0429 13:33:16.782712     732 log.go:172] (0xc000450d10) Data frame received for 5\nI0429 13:33:16.782740     732 log.go:172] (0xc000a5c140) (5) Data frame handling\nI0429 13:33:16.782755     732 log.go:172] (0xc000a5c140) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0429 13:33:16.783176     732 log.go:172] (0xc000450d10) Data frame received for 3\nI0429 13:33:16.783211     732 log.go:172] (0xc0006f8e60) (3) Data frame handling\nI0429 13:33:16.783629     732 log.go:172] (0xc000450d10) Data frame received for 5\nI0429 13:33:16.783645     732 log.go:172] (0xc000a5c140) (5) Data frame handling\nI0429 13:33:16.785502     732 log.go:172] (0xc000450d10) Data frame received for 1\nI0429 13:33:16.785524     732 log.go:172] (0xc0006dd4a0) (1) Data frame handling\nI0429 13:33:16.785538     732 
log.go:172] (0xc0006dd4a0) (1) Data frame sent\nI0429 13:33:16.785551     732 log.go:172] (0xc000450d10) (0xc0006dd4a0) Stream removed, broadcasting: 1\nI0429 13:33:16.785824     732 log.go:172] (0xc000450d10) (0xc0006dd4a0) Stream removed, broadcasting: 1\nI0429 13:33:16.785851     732 log.go:172] (0xc000450d10) (0xc0006f8e60) Stream removed, broadcasting: 3\nI0429 13:33:16.785865     732 log.go:172] (0xc000450d10) (0xc000a5c140) Stream removed, broadcasting: 5\n"
Apr 29 13:33:16.789: INFO: stdout: ""
Apr 29 13:33:16.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 execpod-affinityp9lmr -- /bin/sh -x -c nc -zv -t -w 2 10.102.131.171 80'
Apr 29 13:33:17.006: INFO: stderr: "I0429 13:33:16.932722     752 log.go:172] (0xc000b7d4a0) (0xc000a0a6e0) Create stream\nI0429 13:33:16.932773     752 log.go:172] (0xc000b7d4a0) (0xc000a0a6e0) Stream added, broadcasting: 1\nI0429 13:33:16.937958     752 log.go:172] (0xc000b7d4a0) Reply frame received for 1\nI0429 13:33:16.938032     752 log.go:172] (0xc000b7d4a0) (0xc00055e5a0) Create stream\nI0429 13:33:16.938066     752 log.go:172] (0xc000b7d4a0) (0xc00055e5a0) Stream added, broadcasting: 3\nI0429 13:33:16.939045     752 log.go:172] (0xc000b7d4a0) Reply frame received for 3\nI0429 13:33:16.939081     752 log.go:172] (0xc000b7d4a0) (0xc000444dc0) Create stream\nI0429 13:33:16.939093     752 log.go:172] (0xc000b7d4a0) (0xc000444dc0) Stream added, broadcasting: 5\nI0429 13:33:16.939947     752 log.go:172] (0xc000b7d4a0) Reply frame received for 5\nI0429 13:33:16.996875     752 log.go:172] (0xc000b7d4a0) Data frame received for 5\nI0429 13:33:16.996911     752 log.go:172] (0xc000444dc0) (5) Data frame handling\nI0429 13:33:16.996932     752 log.go:172] (0xc000444dc0) (5) Data frame sent\nI0429 13:33:16.996942     752 log.go:172] (0xc000b7d4a0) Data frame received for 5\n+ nc -zv -t -w 2 10.102.131.171 80\nConnection to 10.102.131.171 80 port [tcp/http] succeeded!\nI0429 13:33:16.996973     752 log.go:172] (0xc000b7d4a0) Data frame received for 3\nI0429 13:33:16.997033     752 log.go:172] (0xc00055e5a0) (3) Data frame handling\nI0429 13:33:16.997070     752 log.go:172] (0xc000444dc0) (5) Data frame handling\nI0429 13:33:17.001695     752 log.go:172] (0xc000b7d4a0) Data frame received for 1\nI0429 13:33:17.001739     752 log.go:172] (0xc000a0a6e0) (1) Data frame handling\nI0429 13:33:17.001784     752 log.go:172] (0xc000a0a6e0) (1) Data frame sent\nI0429 13:33:17.001811     752 log.go:172] (0xc000b7d4a0) (0xc000a0a6e0) Stream removed, broadcasting: 1\nI0429 13:33:17.001855     752 log.go:172] (0xc000b7d4a0) Go away received\nI0429 13:33:17.002306     752 log.go:172] 
(0xc000b7d4a0) (0xc000a0a6e0) Stream removed, broadcasting: 1\nI0429 13:33:17.002343     752 log.go:172] (0xc000b7d4a0) (0xc00055e5a0) Stream removed, broadcasting: 3\nI0429 13:33:17.002356     752 log.go:172] (0xc000b7d4a0) (0xc000444dc0) Stream removed, broadcasting: 5\n"
Apr 29 13:33:17.006: INFO: stdout: ""
Apr 29 13:33:17.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 execpod-affinityp9lmr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.131.171:80/ ; done'
Apr 29 13:33:17.295: INFO: stderr: "I0429 13:33:17.140528     774 log.go:172] (0xc00003a4d0) (0xc0002fd400) Create stream\nI0429 13:33:17.140579     774 log.go:172] (0xc00003a4d0) (0xc0002fd400) Stream added, broadcasting: 1\nI0429 13:33:17.143009     774 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0429 13:33:17.143074     774 log.go:172] (0xc00003a4d0) (0xc0000dd040) Create stream\nI0429 13:33:17.143094     774 log.go:172] (0xc00003a4d0) (0xc0000dd040) Stream added, broadcasting: 3\nI0429 13:33:17.144257     774 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0429 13:33:17.144312     774 log.go:172] (0xc00003a4d0) (0xc00036e6e0) Create stream\nI0429 13:33:17.144336     774 log.go:172] (0xc00003a4d0) (0xc00036e6e0) Stream added, broadcasting: 5\nI0429 13:33:17.145719     774 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0429 13:33:17.202688     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.202741     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.202759     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.202790     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.202801     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.202820     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.210781     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.210813     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.210839     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.212613     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.212654     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.212700     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.213958     774 log.go:172] (0xc00003a4d0) Data frame received for 
5\nI0429 13:33:17.213975     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.213986     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.218111     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.218127     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.218145     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.218689     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.218732     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.218747     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.218769     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.218780     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.218790     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.226571     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.226599     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.226611     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.226941     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.226960     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.226973     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.226992     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.227002     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.227013     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.230693     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.230714     774 log.go:172] (0xc0000dd040) (3) Data frame 
handling\nI0429 13:33:17.230726     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.231124     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.231144     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.231157     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.231174     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.231193     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.231238     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.234984     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.234997     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.235010     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.235280     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.235295     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.235300     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.235309     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.235316     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.235324     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.239349     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.239365     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.239376     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.239730     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.239758     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.239807     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.102.131.171:80/\nI0429 13:33:17.239844     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.239866     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.239880     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.245638     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.245668     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.245681     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.246103     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.246133     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.246155     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.246177     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.246187     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.246196     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.250924     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.250945     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.250958     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.251336     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.251357     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.251382     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.251394     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.251409     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.251423     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.255395     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.255411     774 log.go:172] 
(0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.255433     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.255813     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.255837     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.255855     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.255878     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.255888     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.255900     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\nI0429 13:33:17.259194     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.259208     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.259214     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.259482     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.259496     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.259507     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.259524     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.259533     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.259541     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.263707     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.263725     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.263733     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.264160     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.264195     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.264215     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.264240     774 
log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.264259     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.264283     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curlI0429 13:33:17.264298     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.264312     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.264335     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.269856     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.269949     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.270002     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.272055     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.272129     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.272154     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\nI0429 13:33:17.272165     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.272172     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\n+ echoI0429 13:33:17.272190     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\nI0429 13:33:17.272301     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.272325     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.272352     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n\nI0429 13:33:17.272615     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.272639     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.272646     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.272653     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.272662     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 
13:33:17.272668     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.277004     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.277018     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.277028     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.277718     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.277769     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.277785     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.277795     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.277815     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.277832     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.281615     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.281636     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.281654     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.281841     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.281861     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.281872     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.281886     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.281895     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.281903     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.285390     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.285405     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.285425     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.285660     774 log.go:172] (0xc00003a4d0) Data frame received for 
3\nI0429 13:33:17.285671     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.285681     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.285698     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.285711     774 log.go:172] (0xc00036e6e0) (5) Data frame sent\nI0429 13:33:17.285731     774 log.go:172] (0xc0000dd040) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.288920     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.288933     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.288946     774 log.go:172] (0xc0000dd040) (3) Data frame sent\nI0429 13:33:17.289561     774 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0429 13:33:17.289579     774 log.go:172] (0xc0000dd040) (3) Data frame handling\nI0429 13:33:17.289683     774 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0429 13:33:17.289696     774 log.go:172] (0xc00036e6e0) (5) Data frame handling\nI0429 13:33:17.291136     774 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0429 13:33:17.291154     774 log.go:172] (0xc0002fd400) (1) Data frame handling\nI0429 13:33:17.291168     774 log.go:172] (0xc0002fd400) (1) Data frame sent\nI0429 13:33:17.291182     774 log.go:172] (0xc00003a4d0) (0xc0002fd400) Stream removed, broadcasting: 1\nI0429 13:33:17.291197     774 log.go:172] (0xc00003a4d0) Go away received\nI0429 13:33:17.291478     774 log.go:172] (0xc00003a4d0) (0xc0002fd400) Stream removed, broadcasting: 1\nI0429 13:33:17.291493     774 log.go:172] (0xc00003a4d0) (0xc0000dd040) Stream removed, broadcasting: 3\nI0429 13:33:17.291501     774 log.go:172] (0xc00003a4d0) (0xc00036e6e0) Stream removed, broadcasting: 5\n"
Apr 29 13:33:17.295: INFO: stdout: "\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql\naffinity-clusterip-timeout-dfcql"
Apr 29 13:33:17.296: INFO: Received response from host: 
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Received response from host: affinity-clusterip-timeout-dfcql
Apr 29 13:33:17.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 execpod-affinityp9lmr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.131.171:80/'
Apr 29 13:33:17.488: INFO: stderr: "I0429 13:33:17.414929     795 log.go:172] (0xc000c83080) (0xc000b34640) Create stream\nI0429 13:33:17.414987     795 log.go:172] (0xc000c83080) (0xc000b34640) Stream added, broadcasting: 1\nI0429 13:33:17.418399     795 log.go:172] (0xc000c83080) Reply frame received for 1\nI0429 13:33:17.418436     795 log.go:172] (0xc000c83080) (0xc0004d6e60) Create stream\nI0429 13:33:17.418445     795 log.go:172] (0xc000c83080) (0xc0004d6e60) Stream added, broadcasting: 3\nI0429 13:33:17.419229     795 log.go:172] (0xc000c83080) Reply frame received for 3\nI0429 13:33:17.419254     795 log.go:172] (0xc000c83080) (0xc00036fd60) Create stream\nI0429 13:33:17.419264     795 log.go:172] (0xc000c83080) (0xc00036fd60) Stream added, broadcasting: 5\nI0429 13:33:17.419894     795 log.go:172] (0xc000c83080) Reply frame received for 5\nI0429 13:33:17.478183     795 log.go:172] (0xc000c83080) Data frame received for 5\nI0429 13:33:17.478206     795 log.go:172] (0xc00036fd60) (5) Data frame handling\nI0429 13:33:17.478221     795 log.go:172] (0xc00036fd60) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:17.481329     795 log.go:172] (0xc000c83080) Data frame received for 3\nI0429 13:33:17.481368     795 log.go:172] (0xc0004d6e60) (3) Data frame handling\nI0429 13:33:17.481388     795 log.go:172] (0xc0004d6e60) (3) Data frame sent\nI0429 13:33:17.481753     795 log.go:172] (0xc000c83080) Data frame received for 3\nI0429 13:33:17.481781     795 log.go:172] (0xc0004d6e60) (3) Data frame handling\nI0429 13:33:17.481801     795 log.go:172] (0xc000c83080) Data frame received for 5\nI0429 13:33:17.481823     795 log.go:172] (0xc00036fd60) (5) Data frame handling\nI0429 13:33:17.483846     795 log.go:172] (0xc000c83080) Data frame received for 1\nI0429 13:33:17.483874     795 log.go:172] (0xc000b34640) (1) Data frame handling\nI0429 13:33:17.483893     795 log.go:172] (0xc000b34640) (1) Data frame sent\nI0429 
13:33:17.483932     795 log.go:172] (0xc000c83080) (0xc000b34640) Stream removed, broadcasting: 1\nI0429 13:33:17.484110     795 log.go:172] (0xc000c83080) Go away received\nI0429 13:33:17.484330     795 log.go:172] (0xc000c83080) (0xc000b34640) Stream removed, broadcasting: 1\nI0429 13:33:17.484348     795 log.go:172] (0xc000c83080) (0xc0004d6e60) Stream removed, broadcasting: 3\nI0429 13:33:17.484359     795 log.go:172] (0xc000c83080) (0xc00036fd60) Stream removed, broadcasting: 5\n"
Apr 29 13:33:17.488: INFO: stdout: "affinity-clusterip-timeout-dfcql"
Apr 29 13:33:32.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4377 execpod-affinityp9lmr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.131.171:80/'
Apr 29 13:33:32.725: INFO: stderr: "I0429 13:33:32.622602     816 log.go:172] (0xc0000e8630) (0xc0003514a0) Create stream\nI0429 13:33:32.622671     816 log.go:172] (0xc0000e8630) (0xc0003514a0) Stream added, broadcasting: 1\nI0429 13:33:32.625898     816 log.go:172] (0xc0000e8630) Reply frame received for 1\nI0429 13:33:32.625964     816 log.go:172] (0xc0000e8630) (0xc0003f8780) Create stream\nI0429 13:33:32.625985     816 log.go:172] (0xc0000e8630) (0xc0003f8780) Stream added, broadcasting: 3\nI0429 13:33:32.627041     816 log.go:172] (0xc0000e8630) Reply frame received for 3\nI0429 13:33:32.627077     816 log.go:172] (0xc0000e8630) (0xc000351c20) Create stream\nI0429 13:33:32.627089     816 log.go:172] (0xc0000e8630) (0xc000351c20) Stream added, broadcasting: 5\nI0429 13:33:32.628168     816 log.go:172] (0xc0000e8630) Reply frame received for 5\nI0429 13:33:32.715917     816 log.go:172] (0xc0000e8630) Data frame received for 5\nI0429 13:33:32.715944     816 log.go:172] (0xc000351c20) (5) Data frame handling\nI0429 13:33:32.715969     816 log.go:172] (0xc000351c20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.131.171:80/\nI0429 13:33:32.718378     816 log.go:172] (0xc0000e8630) Data frame received for 3\nI0429 13:33:32.718400     816 log.go:172] (0xc0003f8780) (3) Data frame handling\nI0429 13:33:32.718415     816 log.go:172] (0xc0003f8780) (3) Data frame sent\nI0429 13:33:32.719278     816 log.go:172] (0xc0000e8630) Data frame received for 3\nI0429 13:33:32.719367     816 log.go:172] (0xc0003f8780) (3) Data frame handling\nI0429 13:33:32.719403     816 log.go:172] (0xc0000e8630) Data frame received for 5\nI0429 13:33:32.719417     816 log.go:172] (0xc000351c20) (5) Data frame handling\nI0429 13:33:32.721286     816 log.go:172] (0xc0000e8630) Data frame received for 1\nI0429 13:33:32.721320     816 log.go:172] (0xc0003514a0) (1) Data frame handling\nI0429 13:33:32.721346     816 log.go:172] (0xc0003514a0) (1) Data frame sent\nI0429 
13:33:32.721370     816 log.go:172] (0xc0000e8630) (0xc0003514a0) Stream removed, broadcasting: 1\nI0429 13:33:32.721391     816 log.go:172] (0xc0000e8630) Go away received\nI0429 13:33:32.721725     816 log.go:172] (0xc0000e8630) (0xc0003514a0) Stream removed, broadcasting: 1\nI0429 13:33:32.721745     816 log.go:172] (0xc0000e8630) (0xc0003f8780) Stream removed, broadcasting: 3\nI0429 13:33:32.721756     816 log.go:172] (0xc0000e8630) (0xc000351c20) Stream removed, broadcasting: 5\n"
Apr 29 13:33:32.725: INFO: stdout: "affinity-clusterip-timeout-nnwl4"
Apr 29 13:33:32.725: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4377, will wait for the garbage collector to delete the pods
Apr 29 13:33:33.107: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 225.627357ms
Apr 29 13:33:33.507: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.28441ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:33:40.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4377" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:49.293 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":60,"skipped":875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:33:40.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 29 13:33:40.517: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 13:33:43.472: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:33:53.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-640" for this suite.

• [SLOW TEST:12.743 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":290,"completed":61,"skipped":917,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:33:53.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:33:53.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:33:57.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9237" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":290,"completed":62,"skipped":923,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:33:57.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:33:57.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4377" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":290,"completed":63,"skipped":952,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:33:57.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 29 13:33:57.913: INFO: Waiting up to 5m0s for pod "pod-44526908-2b30-428e-a000-bbd49784f9a9" in namespace "emptydir-5134" to be "Succeeded or Failed"
Apr 29 13:33:57.931: INFO: Pod "pod-44526908-2b30-428e-a000-bbd49784f9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.132396ms
Apr 29 13:33:59.999: INFO: Pod "pod-44526908-2b30-428e-a000-bbd49784f9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086014516s
Apr 29 13:34:02.004: INFO: Pod "pod-44526908-2b30-428e-a000-bbd49784f9a9": Phase="Running", Reason="", readiness=true. Elapsed: 4.090737159s
Apr 29 13:34:04.008: INFO: Pod "pod-44526908-2b30-428e-a000-bbd49784f9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094382521s
STEP: Saw pod success
Apr 29 13:34:04.008: INFO: Pod "pod-44526908-2b30-428e-a000-bbd49784f9a9" satisfied condition "Succeeded or Failed"
Apr 29 13:34:04.011: INFO: Trying to get logs from node kali-worker pod pod-44526908-2b30-428e-a000-bbd49784f9a9 container test-container: 
STEP: delete the pod
Apr 29 13:34:04.046: INFO: Waiting for pod pod-44526908-2b30-428e-a000-bbd49784f9a9 to disappear
Apr 29 13:34:04.050: INFO: Pod pod-44526908-2b30-428e-a000-bbd49784f9a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:34:04.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5134" for this suite.

• [SLOW TEST:6.350 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":64,"skipped":952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:34:04.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 13:34:14.258: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:14.319: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 13:34:16.319: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:16.323: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 13:34:18.319: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:18.323: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 13:34:20.319: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:20.324: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 13:34:22.319: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:22.323: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 13:34:24.319: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 13:34:24.323: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:34:24.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2583" for this suite.

• [SLOW TEST:20.275 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":290,"completed":65,"skipped":980,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:34:24.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 29 13:34:24.410: INFO: Waiting up to 5m0s for pod "pod-87407c7c-a558-4001-92a0-aae6cbadebe2" in namespace "emptydir-3533" to be "Succeeded or Failed"
Apr 29 13:34:24.445: INFO: Pod "pod-87407c7c-a558-4001-92a0-aae6cbadebe2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.120207ms
Apr 29 13:34:26.449: INFO: Pod "pod-87407c7c-a558-4001-92a0-aae6cbadebe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039529652s
Apr 29 13:34:28.490: INFO: Pod "pod-87407c7c-a558-4001-92a0-aae6cbadebe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080768609s
STEP: Saw pod success
Apr 29 13:34:28.491: INFO: Pod "pod-87407c7c-a558-4001-92a0-aae6cbadebe2" satisfied condition "Succeeded or Failed"
Apr 29 13:34:28.493: INFO: Trying to get logs from node kali-worker2 pod pod-87407c7c-a558-4001-92a0-aae6cbadebe2 container test-container: 
STEP: delete the pod
Apr 29 13:34:28.526: INFO: Waiting for pod pod-87407c7c-a558-4001-92a0-aae6cbadebe2 to disappear
Apr 29 13:34:28.535: INFO: Pod pod-87407c7c-a558-4001-92a0-aae6cbadebe2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:34:28.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3533" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":66,"skipped":986,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:34:28.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:34:28.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Apr 29 13:34:29.380: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:29Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:29Z]] name:name1 resourceVersion:64528 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0f5edeca-1068-4067-b8cc-034540a07615] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 29 13:34:39.427: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:39Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:39Z]] name:name2 resourceVersion:64588 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b42e8ca4-027d-440c-a46e-b39e74459517] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 29 13:34:49.434: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:49Z]] name:name1 resourceVersion:64619 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0f5edeca-1068-4067-b8cc-034540a07615] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 29 13:34:59.442: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:59Z]] name:name2 resourceVersion:64649 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b42e8ca4-027d-440c-a46e-b39e74459517] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 29 13:35:09.450: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:29Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:49Z]] name:name1 resourceVersion:64679 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0f5edeca-1068-4067-b8cc-034540a07615] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 29 13:35:19.460: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-29T13:34:39Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-04-29T13:34:59Z]] name:name2 resourceVersion:64709 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b42e8ca4-027d-440c-a46e-b39e74459517] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:35:29.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5195" for this suite.

• [SLOW TEST:61.363 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":290,"completed":67,"skipped":999,"failed":0}
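The watch sequence logged above (ADDED, then MODIFIED, then DELETED for each of the two CRs, with rising resourceVersions) can be sketched as a tiny in-memory check. This is only an illustration of the ordering property the test verifies; the real e2e test drives client-go watches against the apiserver, and the resourceVersion numbers below are placeholders, not values from the log.

```python
# Toy sketch of the ordering property the CRD watch test verifies: each custom
# resource sees ADDED -> MODIFIED -> DELETED, and resourceVersions increase
# strictly across the stream. RV numbers here are illustrative placeholders.
def emit_events():
    """Yield (event_type, name, resource_version) in the order the log shows."""
    yield from [
        ("ADDED", "name1", 1), ("ADDED", "name2", 2),
        ("MODIFIED", "name1", 3), ("MODIFIED", "name2", 4),
        ("DELETED", "name1", 5), ("DELETED", "name2", 6),
    ]

def verify_per_object_order(events):
    """Check ADDED -> MODIFIED -> DELETED per object and monotonic RVs."""
    order = {"ADDED": 0, "MODIFIED": 1, "DELETED": 2}
    progress, last_rv = {}, 0
    for etype, name, rv in events:
        assert rv > last_rv, "resourceVersion must increase"
        last_rv = rv
        assert order[etype] == progress.get(name, 0), f"out of order for {name}"
        progress[name] = order[etype] + 1
    return progress

print(verify_per_object_order(emit_events()))
```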
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:35:29.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:35:34.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-923" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":290,"completed":68,"skipped":1000,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:35:34.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 29 13:35:38.724: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6248 pod-service-account-c89b4738-2d92-4704-9a64-b6abb10df136 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 29 13:35:38.939: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6248 pod-service-account-c89b4738-2d92-4704-9a64-b6abb10df136 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 29 13:35:39.174: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6248 pod-service-account-c89b4738-2d92-4704-9a64-b6abb10df136 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:35:39.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6248" for this suite.

• [SLOW TEST:5.282 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":290,"completed":69,"skipped":1012,"failed":0}
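The three `kubectl exec ... -- cat` commands above read the fixed contents of the auto-mounted service-account volume. A minimal sketch of those well-known paths (the mount point is the standard one the kubelet projects into pods with token automount enabled):

```python
import os.path

# The service-account volume always exposes exactly these three files:
# the bearer token (JWT), the cluster CA bundle, and the pod's namespace.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def sa_paths():
    """Well-known projected files the test above cats in turn."""
    return [os.path.join(SA_DIR, name) for name in ("token", "ca.crt", "namespace")]

print(sa_paths())
```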
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:35:39.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:35:43.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7459" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":290,"completed":70,"skipped":1019,"failed":0}
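The property this test asserts — watches opened at different resourceVersions all observe the remaining events in the same order — can be modeled with a toy fan-out broadcaster. This is a simplification under the assumption that each watcher simply starts partway into one shared stream; the real test runs a background goroutine producing events against the apiserver.

```python
# Toy broadcaster: one event stream fanned out to several watchers, each
# starting at a different offset (standing in for a start resourceVersion).
def broadcast(events, num_watchers, start_indices):
    """Each watcher i starts at start_indices[i] and records what it sees."""
    seen = [[] for _ in range(num_watchers)]
    for pos, ev in enumerate(events):
        for i, start in enumerate(start_indices):
            if pos >= start:
                seen[i].append(ev)
    return seen

events = [f"rv{n}" for n in range(10)]
views = broadcast(events, 3, [0, 3, 5])
# Every watcher's view must be a suffix of the full stream, in identical order.
assert all(v == events[s:] for v, s in zip(views, [0, 3, 5]))
print("all watchers agree on ordering")
```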
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:35:44.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Apr 29 13:35:48.255: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1486 PodName:var-expansion-78c9168a-3f0f-4aaf-9b0d-4fc20b8be42b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:35:48.255: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:35:48.277610       7 log.go:172] (0xc002e70790) (0xc0014ad900) Create stream
I0429 13:35:48.277638       7 log.go:172] (0xc002e70790) (0xc0014ad900) Stream added, broadcasting: 1
I0429 13:35:48.279027       7 log.go:172] (0xc002e70790) Reply frame received for 1
I0429 13:35:48.279065       7 log.go:172] (0xc002e70790) (0xc000b7d040) Create stream
I0429 13:35:48.279080       7 log.go:172] (0xc002e70790) (0xc000b7d040) Stream added, broadcasting: 3
I0429 13:35:48.279732       7 log.go:172] (0xc002e70790) Reply frame received for 3
I0429 13:35:48.279757       7 log.go:172] (0xc002e70790) (0xc001bae820) Create stream
I0429 13:35:48.279768       7 log.go:172] (0xc002e70790) (0xc001bae820) Stream added, broadcasting: 5
I0429 13:35:48.280297       7 log.go:172] (0xc002e70790) Reply frame received for 5
I0429 13:35:48.341385       7 log.go:172] (0xc002e70790) Data frame received for 3
I0429 13:35:48.341413       7 log.go:172] (0xc000b7d040) (3) Data frame handling
I0429 13:35:48.341703       7 log.go:172] (0xc002e70790) Data frame received for 5
I0429 13:35:48.341769       7 log.go:172] (0xc001bae820) (5) Data frame handling
I0429 13:35:48.344672       7 log.go:172] (0xc002e70790) Data frame received for 1
I0429 13:35:48.344691       7 log.go:172] (0xc0014ad900) (1) Data frame handling
I0429 13:35:48.344712       7 log.go:172] (0xc0014ad900) (1) Data frame sent
I0429 13:35:48.344729       7 log.go:172] (0xc002e70790) (0xc0014ad900) Stream removed, broadcasting: 1
I0429 13:35:48.344764       7 log.go:172] (0xc002e70790) Go away received
I0429 13:35:48.344848       7 log.go:172] (0xc002e70790) (0xc0014ad900) Stream removed, broadcasting: 1
I0429 13:35:48.344875       7 log.go:172] (0xc002e70790) (0xc000b7d040) Stream removed, broadcasting: 3
I0429 13:35:48.344884       7 log.go:172] (0xc002e70790) (0xc001bae820) Stream removed, broadcasting: 5
STEP: test for file in mounted path
Apr 29 13:35:48.347: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1486 PodName:var-expansion-78c9168a-3f0f-4aaf-9b0d-4fc20b8be42b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:35:48.347: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:35:48.372032       7 log.go:172] (0xc002e16370) (0xc001baf0e0) Create stream
I0429 13:35:48.372059       7 log.go:172] (0xc002e16370) (0xc001baf0e0) Stream added, broadcasting: 1
I0429 13:35:48.373767       7 log.go:172] (0xc002e16370) Reply frame received for 1
I0429 13:35:48.373805       7 log.go:172] (0xc002e16370) (0xc001baf180) Create stream
I0429 13:35:48.373818       7 log.go:172] (0xc002e16370) (0xc001baf180) Stream added, broadcasting: 3
I0429 13:35:48.374539       7 log.go:172] (0xc002e16370) Reply frame received for 3
I0429 13:35:48.374563       7 log.go:172] (0xc002e16370) (0xc000b7d400) Create stream
I0429 13:35:48.374571       7 log.go:172] (0xc002e16370) (0xc000b7d400) Stream added, broadcasting: 5
I0429 13:35:48.375095       7 log.go:172] (0xc002e16370) Reply frame received for 5
I0429 13:35:48.425957       7 log.go:172] (0xc002e16370) Data frame received for 5
I0429 13:35:48.425994       7 log.go:172] (0xc000b7d400) (5) Data frame handling
I0429 13:35:48.426016       7 log.go:172] (0xc002e16370) Data frame received for 3
I0429 13:35:48.426025       7 log.go:172] (0xc001baf180) (3) Data frame handling
I0429 13:35:48.426797       7 log.go:172] (0xc002e16370) Data frame received for 1
I0429 13:35:48.426818       7 log.go:172] (0xc001baf0e0) (1) Data frame handling
I0429 13:35:48.426834       7 log.go:172] (0xc001baf0e0) (1) Data frame sent
I0429 13:35:48.426847       7 log.go:172] (0xc002e16370) (0xc001baf0e0) Stream removed, broadcasting: 1
I0429 13:35:48.426865       7 log.go:172] (0xc002e16370) Go away received
I0429 13:35:48.426918       7 log.go:172] (0xc002e16370) (0xc001baf0e0) Stream removed, broadcasting: 1
I0429 13:35:48.426930       7 log.go:172] (0xc002e16370) (0xc001baf180) Stream removed, broadcasting: 3
I0429 13:35:48.426939       7 log.go:172] (0xc002e16370) (0xc000b7d400) Stream removed, broadcasting: 5
STEP: updating the annotation value
Apr 29 13:35:48.935: INFO: Successfully updated pod "var-expansion-78c9168a-3f0f-4aaf-9b0d-4fc20b8be42b"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Apr 29 13:35:48.977: INFO: Deleting pod "var-expansion-78c9168a-3f0f-4aaf-9b0d-4fc20b8be42b" in namespace "var-expansion-1486"
Apr 29 13:35:48.981: INFO: Wait up to 5m0s for pod "var-expansion-78c9168a-3f0f-4aaf-9b0d-4fc20b8be42b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:36:35.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1486" for this suite.

• [SLOW TEST:51.020 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":290,"completed":71,"skipped":1024,"failed":0}
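The two exec'd shell commands in this test (`touch /volume_mount/mypath/foo/test.log`, then `test -f /subpath_mount/test.log`) verify that a file created through the full volume path is visible through the subPath mount of the same directory. A sketch of that check using a temporary directory, with a symlink standing in for the kubelet's bind mount (which an unprivileged sketch cannot reproduce):

```python
import os
import tempfile

# Mimic the pod's two views of one directory: the full volume path and the
# subPath mount. The kubelet really bind-mounts; a symlink approximates that.
def subpath_sees_touched_file():
    with tempfile.TemporaryDirectory() as root:
        volume = os.path.join(root, "volume_mount", "mypath", "foo")
        os.makedirs(volume)
        subpath = os.path.join(root, "subpath_mount")
        os.symlink(volume, subpath)                            # stand-in bind mount
        open(os.path.join(volume, "test.log"), "w").close()    # touch .../test.log
        return os.path.isfile(os.path.join(subpath, "test.log"))  # test -f

print(subpath_sees_touched_file())
```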
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:36:35.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:36:35.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 29 13:36:38.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9111 create -f -'
Apr 29 13:36:41.583: INFO: stderr: ""
Apr 29 13:36:41.583: INFO: stdout: "e2e-test-crd-publish-openapi-4906-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 29 13:36:41.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9111 delete e2e-test-crd-publish-openapi-4906-crds test-cr'
Apr 29 13:36:41.692: INFO: stderr: ""
Apr 29 13:36:41.692: INFO: stdout: "e2e-test-crd-publish-openapi-4906-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Apr 29 13:36:41.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9111 apply -f -'
Apr 29 13:36:41.986: INFO: stderr: ""
Apr 29 13:36:41.986: INFO: stdout: "e2e-test-crd-publish-openapi-4906-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Apr 29 13:36:41.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9111 delete e2e-test-crd-publish-openapi-4906-crds test-cr'
Apr 29 13:36:42.123: INFO: stderr: ""
Apr 29 13:36:42.123: INFO: stdout: "e2e-test-crd-publish-openapi-4906-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 29 13:36:42.123: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4906-crds'
Apr 29 13:36:42.336: INFO: stderr: ""
Apr 29 13:36:42.336: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4906-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:36:44.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9111" for this suite.

• [SLOW TEST:9.264 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":290,"completed":72,"skipped":1028,"failed":0}
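In the `kubectl explain` stdout captured above, the CR's top-level field names sit on lines indented by exactly three spaces under the `FIELDS:` heading, with descriptions indented further. A hypothetical helper that pulls those names out of such output (the sample string is abbreviated from the log):

```python
# Illustrative parser for `kubectl explain` output: field names are the
# three-space-indented lines after FIELDS:; deeper indentation is description.
def explain_fields(stdout):
    fields, in_fields = [], False
    for line in stdout.splitlines():
        if line.startswith("FIELDS:"):
            in_fields = True
        elif in_fields and line.startswith("   ") and not line.startswith("    "):
            fields.append(line.strip())
    return fields

# Abbreviated from the stdout in the log above.
sample = (
    "KIND:     E2e-test-crd-publish-openapi-4906-crd\n"
    "VERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n"
    "\nFIELDS:\n"
    "   apiVersion\t\n     APIVersion defines the versioned schema...\n"
    "   kind\t\n     Kind is a string value...\n"
    "   metadata\t\n     Standard object's metadata.\n"
    "   spec\t\n     Specification of Waldo\n"
    "   status\t\n     Status of Waldo\n"
)
print(explain_fields(sample))
```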
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:36:44.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:36:44.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729" in namespace "projected-2339" to be "Succeeded or Failed"
Apr 29 13:36:44.402: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208695ms
Apr 29 13:36:46.955: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5611726s
Apr 29 13:36:48.973: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578923028s
Apr 29 13:36:51.153: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759513024s
Apr 29 13:36:53.158: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.764002238s
STEP: Saw pod success
Apr 29 13:36:53.158: INFO: Pod "downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729" satisfied condition "Succeeded or Failed"
Apr 29 13:36:53.161: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729 container client-container: 
STEP: delete the pod
Apr 29 13:36:53.219: INFO: Waiting for pod downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729 to disappear
Apr 29 13:36:53.236: INFO: Pod downwardapi-volume-5481d177-acad-4dda-82f3-f01b1340a729 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:36:53.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2339" for this suite.

• [SLOW TEST:8.918 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":73,"skipped":1052,"failed":0}
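The `defaultMode` this test exercises is supplied in the pod spec as a decimal integer, because JSON has no octal literals; the API default of 420 is the octal 0644 permission bits the test expects on each projected file:

```python
# Kubernetes volume defaultMode is decimal in JSON manifests: 420 == 0o644.
DEFAULT_MODE_DECIMAL = 420

assert oct(DEFAULT_MODE_DECIMAL) == "0o644"
assert int("644", 8) == DEFAULT_MODE_DECIMAL
print(f"defaultMode {DEFAULT_MODE_DECIMAL} == {oct(DEFAULT_MODE_DECIMAL)}")
```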
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:36:53.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:36:53.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
Apr 29 13:36:53.561: INFO: stderr: ""
Apr 29 13:36:53.561: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.2.226+0c3c2cd6ac8c9f\", GitCommit:\"0c3c2cd6ac8c9ffefc38f9746034e546331b9cd6\", GitTreeState:\"clean\", BuildDate:\"2020-04-29T10:51:39Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:36:53.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5978" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":290,"completed":74,"skipped":1060,"failed":0}
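The version output above pairs client v1.19.0-alpha.2.226+0c3c2cd6ac8c9f with server v1.18.2. kubectl is supported within one minor version of the apiserver, and a quick skew check over those GitVersion strings might look like this:

```python
import re

def minor(git_version):
    """Extract (major, minor) from a GitVersion string like 'v1.18.2'."""
    m = re.match(r"v(\d+)\.(\d+)", git_version)
    return int(m.group(1)), int(m.group(2))

# Versions taken from the kubectl stdout captured in the log above.
client = minor("v1.19.0-alpha.2.226+0c3c2cd6ac8c9f")
server = minor("v1.18.2")
assert client[0] == server[0] and abs(client[1] - server[1]) <= 1
print(f"skew ok: client {client}, server {server}")
```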
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:36:53.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2830
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-2830
Apr 29 13:36:53.768: INFO: Found 0 stateful pods, waiting for 1
Apr 29 13:37:03.773: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Apr 29 13:37:07.961: INFO: Deleting all statefulset in ns statefulset-2830
Apr 29 13:37:08.038: INFO: Scaling statefulset ss to 0
Apr 29 13:37:18.166: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 13:37:18.169: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:37:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2830" for this suite.

• [SLOW TEST:24.596 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":290,"completed":75,"skipped":1073,"failed":0}
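"Updating a scale subresource" in this test amounts to writing a Scale object (apiVersion `autoscaling/v1`) to the StatefulSet's `/scale` endpoint, with `spec.replicas` carrying the new count. A minimal sketch of that request body — the replica count here is illustrative, the log does not show the value the test used:

```python
import json

def scale_body(name, namespace, replicas):
    """Build the Scale object written to .../statefulsets/{name}/scale."""
    return {
        "apiVersion": "autoscaling/v1",
        "kind": "Scale",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"replicas": replicas},  # illustrative count
    }

body = scale_body("ss", "statefulset-2830", 2)
print(json.dumps(body))
```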
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:37:18.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-4f9da28b-3eb5-43ab-88f2-0190b9542bf8
STEP: Creating a pod to test consume configMaps
Apr 29 13:37:18.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5" in namespace "configmap-5325" to be "Succeeded or Failed"
Apr 29 13:37:18.357: INFO: Pod "pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.436981ms
Apr 29 13:37:20.361: INFO: Pod "pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019595872s
Apr 29 13:37:22.364: INFO: Pod "pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022765372s
STEP: Saw pod success
Apr 29 13:37:22.364: INFO: Pod "pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5" satisfied condition "Succeeded or Failed"
Apr 29 13:37:22.367: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5 container configmap-volume-test: 
STEP: delete the pod
Apr 29 13:37:22.451: INFO: Waiting for pod pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5 to disappear
Apr 29 13:37:22.527: INFO: Pod pod-configmaps-2d1c2683-2294-4ec7-a25e-7794ee9c5ab5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:37:22.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5325" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":290,"completed":76,"skipped":1077,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:37:22.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:37:33.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6943" for this suite.

• [SLOW TEST:11.259 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":290,"completed":77,"skipped":1079,"failed":0}
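The quota lifecycle the STEPs above walk through — create an RC and see usage charged, delete it and see usage released — can be modeled as toy accounting. This is a sketch of the bookkeeping the quota controller performs, not its actual implementation:

```python
# Toy model of ResourceQuota accounting: charge on create, release on delete,
# and reject creates that would exceed the hard limit.
class Quota:
    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def charge(self, resource, n=1):
        if self.used[resource] + n > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += n

    def release(self, resource, n=1):
        self.used[resource] -= n

q = Quota({"replicationcontrollers": 1})
q.charge("replicationcontrollers")    # STEP: Creating a ReplicationController
assert q.used["replicationcontrollers"] == 1
q.release("replicationcontrollers")   # STEP: Deleting a ReplicationController
assert q.used["replicationcontrollers"] == 0
print("quota usage released:", q.used)
```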
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:37:33.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:37:34.567: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:37:36.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 13:37:38.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764254, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:37:41.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:37:41.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5173" for this suite.
STEP: Destroying namespace "webhook-5173-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.974 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":290,"completed":78,"skipped":1120,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:37:41.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:37:45.045: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 29 13:37:47.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764263, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 13:37:49.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764265, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764263, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:37:52.375: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:37:52.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:37:53.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6170" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:11.688 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":290,"completed":79,"skipped":1124,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:37:53.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 29 13:37:58.802: INFO: Successfully updated pod "labelsupdate0a75873a-6a5a-4029-8827-8febbf94d207"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:38:00.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6616" for this suite.

• [SLOW TEST:7.231 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":290,"completed":80,"skipped":1131,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:38:00.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:38:00.956: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 29 13:38:05.960: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 13:38:05.960: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Apr 29 13:38:06.003: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-744 /apis/apps/v1/namespaces/deployment-744/deployments/test-cleanup-deployment 50acdb95-f27c-486a-a1ce-1a2087549bf1 65732 1 2020-04-29 13:38:05 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-04-29 13:38:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b45db8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Apr 29 13:38:06.082: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694  deployment-744 /apis/apps/v1/namespaces/deployment-744/replicasets/test-cleanup-deployment-6688745694 84311567-a68d-436a-8d3b-f61cbf5515bf 65739 1 2020-04-29 13:38:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 50acdb95-f27c-486a-a1ce-1a2087549bf1 0xc0037f6897 0xc0037f6898}] []  [{kube-controller-manager Update apps/v1 2020-04-29 13:38:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50acdb95-f27c-486a-a1ce-1a2087549bf1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037f6958  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 13:38:06.082: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Apr 29 13:38:06.082: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-744 /apis/apps/v1/namespaces/deployment-744/replicasets/test-cleanup-controller 243b5b44-519e-4a87-9783-c5da4bbee969 65733 1 2020-04-29 13:38:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 50acdb95-f27c-486a-a1ce-1a2087549bf1 0xc0037f6607 0xc0037f6608}] []  [{e2e.test Update apps/v1 2020-04-29 13:38:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 13:38:05 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"50acdb95-f27c-486a-a1ce-1a2087549bf1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0037f67b8  ClusterFirst map[]     false false false  
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Apr 29 13:38:06.115: INFO: Pod "test-cleanup-controller-4jw7r" is available:
&Pod{ObjectMeta:{test-cleanup-controller-4jw7r test-cleanup-controller- deployment-744 /api/v1/namespaces/deployment-744/pods/test-cleanup-controller-4jw7r 7bbe7033-c8c5-4fa6-9e5c-b2969fd2482d 65721 0 2020-04-29 13:38:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 243b5b44-519e-4a87-9783-c5da4bbee969 0xc0037f6ff7 0xc0037f6ff8}] []  [{kube-controller-manager Update v1 2020-04-29 13:38:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"243b5b44-519e-4a87-9783-c5da4bbee969\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 13:38:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7rctl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7rctl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7rctl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServic
eAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:38:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:38:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:38:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:38:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.68,StartTime:2020-04-29 13:38:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 13:38:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://323fd9a4d83c9a898ad463d159770d189852f895f2625b0753891783595ef6dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 13:38:06.116: INFO: Pod "test-cleanup-deployment-6688745694-hldnc" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-hldnc test-cleanup-deployment-6688745694- deployment-744 /api/v1/namespaces/deployment-744/pods/test-cleanup-deployment-6688745694-hldnc b3a7988f-62ab-4a23-87c6-91755ea98a41 65740 0 2020-04-29 13:38:05 +0000 UTC   map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 84311567-a68d-436a-8d3b-f61cbf5515bf 0xc0037f7327 0xc0037f7328}] []  [{kube-controller-manager Update v1 2020-04-29 13:38:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"84311567-a68d-436a-8d3b-f61cbf5515bf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7rctl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7rctl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Request
s:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7rctl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastPr
obeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 13:38:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:38:06.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-744" for this suite.

• [SLOW TEST:5.298 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":290,"completed":81,"skipped":1134,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:38:06.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:38:06.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6738" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":290,"completed":82,"skipped":1147,"failed":0}
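The Lease test above exercises the coordination API directly. A minimal Lease object of the kind that API serves looks like the following sketch (the name and holder identity are hypothetical, not taken from the test):

```yaml
# Sketch of a coordination.k8s.io/v1 Lease; names are illustrative only.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease            # hypothetical name
  namespace: lease-test-6738  # namespace created by the test above
spec:
  holderIdentity: demo-holder # hypothetical holder
  leaseDurationSeconds: 30
```

Clients renew such a lease by updating `spec.renewTime`, which is how the kubelet heartbeat and leader election use this API.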
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:38:06.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Apr 29 13:38:06.558: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6797'
Apr 29 13:38:06.893: INFO: stderr: ""
Apr 29 13:38:06.893: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 13:38:06.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:07.350: INFO: stderr: ""
Apr 29 13:38:07.350: INFO: stdout: "update-demo-nautilus-7q89l update-demo-nautilus-fg7rd "
Apr 29 13:38:07.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7q89l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:07.634: INFO: stderr: ""
Apr 29 13:38:07.634: INFO: stdout: ""
Apr 29 13:38:07.634: INFO: update-demo-nautilus-7q89l is created but not running
Apr 29 13:38:12.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:12.728: INFO: stderr: ""
Apr 29 13:38:12.728: INFO: stdout: "update-demo-nautilus-7q89l update-demo-nautilus-fg7rd "
Apr 29 13:38:12.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7q89l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:12.820: INFO: stderr: ""
Apr 29 13:38:12.820: INFO: stdout: "true"
Apr 29 13:38:12.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7q89l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:12.930: INFO: stderr: ""
Apr 29 13:38:12.931: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:12.931: INFO: validating pod update-demo-nautilus-7q89l
Apr 29 13:38:12.935: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:12.935: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:12.935: INFO: update-demo-nautilus-7q89l is verified up and running
Apr 29 13:38:12.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:13.042: INFO: stderr: ""
Apr 29 13:38:13.043: INFO: stdout: "true"
Apr 29 13:38:13.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:13.143: INFO: stderr: ""
Apr 29 13:38:13.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:13.143: INFO: validating pod update-demo-nautilus-fg7rd
Apr 29 13:38:13.146: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:13.147: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:13.147: INFO: update-demo-nautilus-fg7rd is verified up and running
STEP: scaling down the replication controller
Apr 29 13:38:13.149: INFO: scanned /root for discovery docs: 
Apr 29 13:38:13.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6797'
Apr 29 13:38:14.287: INFO: stderr: ""
Apr 29 13:38:14.287: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 13:38:14.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:14.393: INFO: stderr: ""
Apr 29 13:38:14.393: INFO: stdout: "update-demo-nautilus-7q89l update-demo-nautilus-fg7rd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 29 13:38:19.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:19.498: INFO: stderr: ""
Apr 29 13:38:19.498: INFO: stdout: "update-demo-nautilus-fg7rd "
Apr 29 13:38:19.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:19.587: INFO: stderr: ""
Apr 29 13:38:19.587: INFO: stdout: "true"
Apr 29 13:38:19.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:19.695: INFO: stderr: ""
Apr 29 13:38:19.695: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:19.695: INFO: validating pod update-demo-nautilus-fg7rd
Apr 29 13:38:19.699: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:19.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:19.699: INFO: update-demo-nautilus-fg7rd is verified up and running
STEP: scaling up the replication controller
Apr 29 13:38:19.701: INFO: scanned /root for discovery docs: 
Apr 29 13:38:19.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6797'
Apr 29 13:38:20.860: INFO: stderr: ""
Apr 29 13:38:20.860: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 13:38:20.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:20.964: INFO: stderr: ""
Apr 29 13:38:20.964: INFO: stdout: "update-demo-nautilus-fg7rd update-demo-nautilus-m42kx "
Apr 29 13:38:20.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:21.066: INFO: stderr: ""
Apr 29 13:38:21.066: INFO: stdout: "true"
Apr 29 13:38:21.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:21.201: INFO: stderr: ""
Apr 29 13:38:21.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:21.201: INFO: validating pod update-demo-nautilus-fg7rd
Apr 29 13:38:21.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:21.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:21.206: INFO: update-demo-nautilus-fg7rd is verified up and running
Apr 29 13:38:21.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m42kx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:21.298: INFO: stderr: ""
Apr 29 13:38:21.298: INFO: stdout: ""
Apr 29 13:38:21.298: INFO: update-demo-nautilus-m42kx is created but not running
Apr 29 13:38:26.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6797'
Apr 29 13:38:26.413: INFO: stderr: ""
Apr 29 13:38:26.413: INFO: stdout: "update-demo-nautilus-fg7rd update-demo-nautilus-m42kx "
Apr 29 13:38:26.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:26.527: INFO: stderr: ""
Apr 29 13:38:26.527: INFO: stdout: "true"
Apr 29 13:38:26.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fg7rd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:26.622: INFO: stderr: ""
Apr 29 13:38:26.622: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:26.622: INFO: validating pod update-demo-nautilus-fg7rd
Apr 29 13:38:26.626: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:26.626: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:26.626: INFO: update-demo-nautilus-fg7rd is verified up and running
Apr 29 13:38:26.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m42kx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:26.729: INFO: stderr: ""
Apr 29 13:38:26.729: INFO: stdout: "true"
Apr 29 13:38:26.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m42kx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6797'
Apr 29 13:38:26.827: INFO: stderr: ""
Apr 29 13:38:26.827: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 13:38:26.827: INFO: validating pod update-demo-nautilus-m42kx
Apr 29 13:38:26.832: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 13:38:26.832: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 13:38:26.832: INFO: update-demo-nautilus-m42kx is verified up and running
STEP: using delete to clean up resources
Apr 29 13:38:26.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6797'
Apr 29 13:38:26.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 13:38:26.932: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 29 13:38:26.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6797'
Apr 29 13:38:27.039: INFO: stderr: "No resources found in kubectl-6797 namespace.\n"
Apr 29 13:38:27.039: INFO: stdout: ""
Apr 29 13:38:27.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6797 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 13:38:27.138: INFO: stderr: ""
Apr 29 13:38:27.138: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:38:27.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6797" for this suite.

• [SLOW TEST:20.828 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":290,"completed":83,"skipped":1149,"failed":0}
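The log above creates the `update-demo-nautilus` replication controller from stdin, so its manifest is not shown. A sketch consistent with the names and image in the log (the container port is an assumption) would be:

```yaml
# Reconstruction of the Update Demo RC, inferred from the log above;
# containerPort is an assumption, not confirmed by the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80  # assumed
```

The scaling steps in the log then reduce to `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m` and back to `--replicas=2`, with the Go templates polling pod names and container state until the expected replica count is running.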
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:38:27.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 13:38:31.533: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:38:31.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7310" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":84,"skipped":1161,"failed":0}
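The termination-message test above does not print its pod spec. A minimal pod matching the behavior it checks (termination message read from the file, with `FallbackToLogsOnError` set) could look like this sketch; the pod and container names are hypothetical:

```yaml
# Sketch, not the test's actual manifest: writes "OK" to the
# termination-log file, matching the "Expected: &{OK}" assertion above.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox  # assumed image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

With `FallbackToLogsOnError`, the kubelet uses the file's contents when present and falls back to the tail of the container log only if the container fails with an empty message file.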
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:38:31.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-a00ac5fb-edfb-46e2-9d34-56953c9a3c80 in namespace container-probe-7627
Apr 29 13:38:35.666: INFO: Started pod busybox-a00ac5fb-edfb-46e2-9d34-56953c9a3c80 in namespace container-probe-7627
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 13:38:35.670: INFO: Initial restart count of pod busybox-a00ac5fb-edfb-46e2-9d34-56953c9a3c80 is 0
Apr 29 13:39:30.685: INFO: Restart count of pod container-probe-7627/busybox-a00ac5fb-edfb-46e2-9d34-56953c9a3c80 is now 1 (55.015483051s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:39:30.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7627" for this suite.

• [SLOW TEST:59.182 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":290,"completed":85,"skipped":1222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:39:30.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 13:39:43.404: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 13:39:43.418: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 13:39:45.419: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 13:39:45.441: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 13:39:47.419: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 13:39:47.447: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:39:47.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4939" for this suite.

• [SLOW TEST:16.660 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":290,"completed":86,"skipped":1283,"failed":0}
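The lifecycle-hook test first starts a separate handler pod, then creates `pod-with-poststart-http-hook`. A sketch of the hook pod (the handler address, port, and path are assumptions, not taken from the log) would be:

```yaml
# Sketch of a postStart httpGet hook pod; host/port/path of the
# handler are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2  # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo        # hypothetical handler path
          port: 8080         # hypothetical handler port
          host: 10.244.0.10  # hypothetical handler pod IP
```

The test passes once the handler records the incoming hook request, confirming the kubelet issued the HTTP GET immediately after container start.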
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:39:47.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:39:47.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16" in namespace "downward-api-8038" to be "Succeeded or Failed"
Apr 29 13:39:47.615: INFO: Pod "downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16": Phase="Pending", Reason="", readiness=false. Elapsed: 70.708463ms
Apr 29 13:39:49.620: INFO: Pod "downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075426181s
Apr 29 13:39:51.624: INFO: Pod "downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079534648s
STEP: Saw pod success
Apr 29 13:39:51.624: INFO: Pod "downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16" satisfied condition "Succeeded or Failed"
Apr 29 13:39:51.628: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16 container client-container: 
STEP: delete the pod
Apr 29 13:39:51.676: INFO: Waiting for pod downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16 to disappear
Apr 29 13:39:51.691: INFO: Pod downwardapi-volume-e8a4015a-f2d5-4d5c-b9d9-0e4a1ab66c16 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:39:51.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8038" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":290,"completed":87,"skipped":1283,"failed":0}
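The downward API test above mounts the container's own memory limit into a file and checks the logged value. A minimal equivalent (pod name, mount path, and limit value are illustrative) looks like:

```yaml
# Sketch of a downwardAPI volume exposing limits.memory; the value
# printed by the container is the limit in bytes.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"  # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

`resourceFieldRef` is required here because resource limits, unlike labels and annotations, are per-container rather than per-pod.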
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:39:51.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:39:52.548: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:39:54.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764392, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764392, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764392, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764392, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:39:57.611: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:39:57.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6799-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:39:58.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4079" for this suite.
STEP: Destroying namespace "webhook-4079-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.209 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":290,"completed":88,"skipped":1292,"failed":0}
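The webhook registration step above ("Registering the mutating webhook ... via the AdmissionRegistration API") corresponds to an object along these lines; the webhook name, service path, and rule details are assumptions beyond what the log shows:

```yaml
# Sketch of the MutatingWebhookConfiguration the test registers;
# only the CRD resource name and namespaces come from the log.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook  # hypothetical name
webhooks:
- name: custom-resource-mutation.webhook.example.com  # hypothetical
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-6799-crds"]
  clientConfig:
    service:
      namespace: webhook-4079
      name: e2e-test-webhook
      path: /mutating-custom-resource  # hypothetical path
    caBundle: ""  # CA bundle elided
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The "different stored version" part of the test then flips the CRD's storage version from v1 to v2 and verifies the webhook mutation applies under both.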
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:39:58.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:39:58.951: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 13:40:01.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 29 13:40:03.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 create -f -'
Apr 29 13:40:07.898: INFO: stderr: ""
Apr 29 13:40:07.898: INFO: stdout: "e2e-test-crd-publish-openapi-5197-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 29 13:40:07.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 delete e2e-test-crd-publish-openapi-5197-crds test-foo'
Apr 29 13:40:08.009: INFO: stderr: ""
Apr 29 13:40:08.009: INFO: stdout: "e2e-test-crd-publish-openapi-5197-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 29 13:40:08.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 apply -f -'
Apr 29 13:40:08.250: INFO: stderr: ""
Apr 29 13:40:08.250: INFO: stdout: "e2e-test-crd-publish-openapi-5197-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 29 13:40:08.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 delete e2e-test-crd-publish-openapi-5197-crds test-foo'
Apr 29 13:40:08.381: INFO: stderr: ""
Apr 29 13:40:08.381: INFO: stdout: "e2e-test-crd-publish-openapi-5197-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 29 13:40:08.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 create -f -'
Apr 29 13:40:08.643: INFO: rc: 1
Apr 29 13:40:08.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 apply -f -'
Apr 29 13:40:08.915: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 29 13:40:08.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 create -f -'
Apr 29 13:40:09.151: INFO: rc: 1
Apr 29 13:40:09.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8822 apply -f -'
Apr 29 13:40:09.427: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 29 13:40:09.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5197-crds'
Apr 29 13:40:09.653: INFO: stderr: ""
Apr 29 13:40:09.653: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5197-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 29 13:40:09.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5197-crds.metadata'
Apr 29 13:40:09.918: INFO: stderr: ""
Apr 29 13:40:09.918: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5197-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 29 13:40:09.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5197-crds.spec'
Apr 29 13:40:10.173: INFO: stderr: ""
Apr 29 13:40:10.173: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5197-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Apr 29 13:40:10.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5197-crds.spec.bars'
Apr 29 13:40:10.443: INFO: stderr: ""
Apr 29 13:40:10.443: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5197-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works for CR with the same resource name as built-in object
Apr 29 13:40:10.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain ksvc.spec'
Apr 29 13:40:10.687: INFO: stderr: ""
Apr 29 13:40:10.687: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8015-crd\nVERSION:  crd-publish-openapi-test-service.example.com/v1alpha1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of CustomService\n\nFIELDS:\n   dummy\t\n     Dummy property.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 29 13:40:10.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5197-crds.spec.bars2'
Apr 29 13:40:10.948: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:40:16.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8822" for this suite.

• [SLOW TEST:17.884 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":290,"completed":89,"skipped":1317,"failed":0}
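Aside: the three validation phases above (accept known and required properties, reject unknown properties, reject a request missing required properties) all come down to checking an object against the CRD's published OpenAPI schema. A loose, cluster-free sketch of that check, with hypothetical field names rather than the test's generated CRD:

```python
# Toy sketch of the client-side validation kubectl performs against a
# published OpenAPI schema. Real structural-schema handling (pruning,
# nested types, x-kubernetes-* extensions) is far richer than this.
def validate(obj, schema):
    """Return a list of validation errors for obj against a tiny schema."""
    errors = []
    props = schema.get("properties", {})
    for field in schema.get("required", []):
        if field not in obj:
            errors.append(f"missing required field {field!r}")
    for field in obj:
        if field not in props:
            errors.append(f"unknown field {field!r}")
    return errors

# Hypothetical schema: "name" required, "age" optional, nothing else allowed.
schema = {"required": ["name"], "properties": {"name": {}, "age": {}}}

print(validate({"name": "test-foo", "age": 10}, schema))     # []
print(validate({"name": "test-foo", "color": "blue"}, schema))  # flags 'color'
print(validate({"age": 10}, schema))                         # flags missing 'name'
```

The middle and last calls correspond to the `rc: 1` lines in the log: kubectl exits non-zero when the object fails schema validation.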
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:40:16.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-818c6c50-f1ec-4802-9881-fb0cda51b895
STEP: Creating a pod to test consume secrets
Apr 29 13:40:16.891: INFO: Waiting up to 5m0s for pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5" in namespace "secrets-4506" to be "Succeeded or Failed"
Apr 29 13:40:16.907: INFO: Pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.255268ms
Apr 29 13:40:18.927: INFO: Pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035692725s
Apr 29 13:40:20.931: INFO: Pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039939404s
Apr 29 13:40:22.935: INFO: Pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044093994s
STEP: Saw pod success
Apr 29 13:40:22.935: INFO: Pod "pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5" satisfied condition "Succeeded or Failed"
Apr 29 13:40:22.938: INFO: Trying to get logs from node kali-worker pod pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5 container secret-volume-test: 
STEP: delete the pod
Apr 29 13:40:22.971: INFO: Waiting for pod pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5 to disappear
Apr 29 13:40:23.010: INFO: Pod pod-secrets-1962f40f-b3ed-487c-96f7-6819808cf7f5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:40:23.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4506" for this suite.

• [SLOW TEST:6.227 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":90,"skipped":1318,"failed":0}
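Aside: the repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines above are the framework's poll loop. A simplified sketch of that pattern, with a fake phase source instead of an API server (the real framework polls roughly every 2s for up to 5m):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a wanted phase or timeout elapses.

    Loose sketch of the e2e framework's wait-for-pod-condition loop;
    get_phase stands in for a GET of the pod's status.phase.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod never reached one of {want}")

# Fake phase source: Pending twice, then Succeeded, like the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases)))  # Succeeded
```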
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:40:23.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:40:23.166: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 29 13:40:23.198: INFO: Number of nodes with available pods: 0
Apr 29 13:40:23.198: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 29 13:40:23.295: INFO: Number of nodes with available pods: 0
Apr 29 13:40:23.295: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:24.299: INFO: Number of nodes with available pods: 0
Apr 29 13:40:24.299: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:25.299: INFO: Number of nodes with available pods: 0
Apr 29 13:40:25.299: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:26.299: INFO: Number of nodes with available pods: 1
Apr 29 13:40:26.299: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 29 13:40:26.375: INFO: Number of nodes with available pods: 1
Apr 29 13:40:26.375: INFO: Number of running nodes: 0, number of available pods: 1
Apr 29 13:40:27.379: INFO: Number of nodes with available pods: 0
Apr 29 13:40:27.379: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 29 13:40:27.483: INFO: Number of nodes with available pods: 0
Apr 29 13:40:27.483: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:28.488: INFO: Number of nodes with available pods: 0
Apr 29 13:40:28.488: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:29.488: INFO: Number of nodes with available pods: 0
Apr 29 13:40:29.488: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:30.488: INFO: Number of nodes with available pods: 0
Apr 29 13:40:30.488: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:31.488: INFO: Number of nodes with available pods: 0
Apr 29 13:40:31.488: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:32.502: INFO: Number of nodes with available pods: 0
Apr 29 13:40:32.502: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:33.488: INFO: Number of nodes with available pods: 0
Apr 29 13:40:33.488: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:40:34.488: INFO: Number of nodes with available pods: 1
Apr 29 13:40:34.488: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1811, will wait for the garbage collector to delete the pods
Apr 29 13:40:34.563: INFO: Deleting DaemonSet.extensions daemon-set took: 15.877102ms
Apr 29 13:40:34.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231913ms
Apr 29 13:40:43.483: INFO: Number of nodes with available pods: 0
Apr 29 13:40:43.483: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 13:40:43.488: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1811/daemonsets","resourceVersion":"66629"},"items":null}

Apr 29 13:40:43.491: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1811/pods","resourceVersion":"66629"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:40:43.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1811" for this suite.

• [SLOW TEST:20.515 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":290,"completed":91,"skipped":1375,"failed":0}
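Aside: the blue/green label flips above drive scheduling through plain nodeSelector matching. A minimal sketch of which nodes should carry a daemon pod; the label key is hypothetical, and real daemon scheduling also weighs taints, tolerations, and node affinity:

```python
def nodes_wanting_daemon(nodes, selector):
    """Return names of nodes whose labels satisfy the DaemonSet's nodeSelector.

    nodes: {node_name: {label_key: label_value}}; a node matches only if
    every selector key/value pair is present in its labels.
    """
    return [name for name, labels in sorted(nodes.items())
            if all(labels.get(k) == v for k, v in selector.items())]

# Hypothetical label key "color"; node names follow the log above.
nodes = {"kali-worker": {"color": "blue"}, "kali-worker2": {"color": "green"}}
print(nodes_wanting_daemon(nodes, {"color": "blue"}))   # ['kali-worker']
print(nodes_wanting_daemon(nodes, {"color": "green"}))  # ['kali-worker2']
```

Relabeling a node from blue to green is what makes the pod count drop to zero and then recover on the newly matching node, as the log shows.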
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:40:43.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:40:43.905: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:40:45.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764444, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 13:40:47.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764444, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764443, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:40:51.011: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
Apr 29 13:40:58.247: INFO: Waiting for webhook configuration to be ready...
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:41:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2193" for this suite.
STEP: Destroying namespace "webhook-2193-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.998 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":290,"completed":92,"skipped":1389,"failed":0}
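Aside: the four timeout cases above reduce to one rule: a call that outlives its timeout is an error only when the webhook's failurePolicy is Fail. A loose sketch of the outcome (not the apiserver's actual code), assuming the v1 default timeout of 10s:

```python
def admit(webhook_latency, timeout=10.0, failure_policy="Fail"):
    """Return True if the request is admitted despite a slow webhook.

    Sketch only: if the webhook answers within the timeout it admits;
    on timeout, failurePolicy decides. timeoutSeconds defaults to 10
    in admissionregistration.k8s.io/v1.
    """
    if webhook_latency <= timeout:
        return True                        # webhook answered in time
    return failure_policy == "Ignore"      # timed out: Fail rejects, Ignore admits

# The four cases exercised by the test, with its 5s-slow webhook:
print(admit(5, timeout=1))                          # False: 1s timeout, Fail
print(admit(5, timeout=1, failure_policy="Ignore")) # True: timeout, but Ignore
print(admit(5, timeout=30))                         # True: timeout > latency
print(admit(5))                                     # True: default 10s > 5s
```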
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:41:03.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Apr 29 13:41:03.591: INFO: PodSpec: initContainers in spec.initContainers
Apr 29 13:41:53.859: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-972e19f0-8379-44e0-b40d-d1ec6a3d3866", GenerateName:"", Namespace:"init-container-4273", SelfLink:"/api/v1/namespaces/init-container-4273/pods/pod-init-972e19f0-8379-44e0-b40d-d1ec6a3d3866", UID:"eacbf142-cd86-4146-89d6-c4c4d5843a00", ResourceVersion:"66958", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723764463, loc:(*time.Location)(0x7c45300)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"591791395"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000f9f620), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f9f660)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000f9f6a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f9f6c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j46xb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00529ae40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j46xb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j46xb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j46xb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005a02c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c65490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a02d10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a02d30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005a02d38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005a02d3c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764463, loc:(*time.Location)(0x7c45300)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764463, loc:(*time.Location)(0x7c45300)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764463, loc:(*time.Location)(0x7c45300)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723764463, loc:(*time.Location)(0x7c45300)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.75", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.75"}}, StartTime:(*v1.Time)(0xc000f9f7c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000f9f880), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c65570)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://318077313c0d423eca1105785a72f4d5e2448baf821008f5b16c84a57fdbaf01", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f9f8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f9f7e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc005a02dff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:41:53.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4273" for this suite.

• [SLOW TEST:54.409 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":290,"completed":93,"skipped":1416,"failed":0}
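The passing test above verifies the init-container contract: init containers run one at a time, in order, and while one keeps failing under `RestartPolicy: Always` (here `init1` running `/bin/false`), later init containers and the app container (`run1`) never start. A minimal simulation of that ordering rule, with container names mirroring the pod in the log (this is an illustrative sketch, not the e2e suite's own code):

```python
# Simulate the init-container semantics verified above: init containers run
# sequentially; if one never exits 0, app containers are never started.
def run_pod(init_results):
    """init_results: list of (name, exit codes per restart attempt)."""
    for name, exit_codes in init_results:
        for code in exit_codes:
            if code == 0:          # init container succeeded; move to the next one
                break
        else:                      # every attempt failed
            return f"init container {name} still failing; app containers not started"
    return "app containers started"

# init1 runs /bin/false and fails on every restart, so run1 never starts:
print(run_pod([("init1", [1, 1, 1]), ("init2", [0])]))
# -> init container init1 still failing; app containers not started
```

Swapping `init1`'s exit codes for ones that eventually succeed (`[1, 0]`) lets the pod proceed to its app containers, which is the companion behavior the suite tests elsewhere.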
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:41:57.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 29 13:41:58.276: INFO: Waiting up to 5m0s for pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c" in namespace "emptydir-1108" to be "Succeeded or Failed"
Apr 29 13:41:58.290: INFO: Pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.06327ms
Apr 29 13:42:00.295: INFO: Pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018902811s
Apr 29 13:42:02.336: INFO: Pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.059252777s
Apr 29 13:42:04.340: INFO: Pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063793913s
STEP: Saw pod success
Apr 29 13:42:04.340: INFO: Pod "pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c" satisfied condition "Succeeded or Failed"
Apr 29 13:42:04.344: INFO: Trying to get logs from node kali-worker2 pod pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c container test-container: 
STEP: delete the pod
Apr 29 13:42:04.432: INFO: Waiting for pod pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c to disappear
Apr 29 13:42:04.502: INFO: Pod pod-09c374c8-d4cb-426e-8ea2-d7386a0fbe6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:42:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1108" for this suite.

• [SLOW TEST:6.573 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":94,"skipped":1418,"failed":0}
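The `(root,0666,tmpfs)` test above mounts a tmpfs-backed emptyDir, writes a file with mode 0666 as root, and checks the resulting permissions from inside the container. A sketch of that permission check in plain Python (the temp directory here is a hypothetical stand-in for the tmpfs emptyDir mount; this is not the suite's mounttest container code):

```python
import os
import stat
import tempfile

# Stand-in for the emptyDir tmpfs mount point used by the test.
mount_dir = tempfile.mkdtemp()
test_file = os.path.join(mount_dir, "test-file")

with open(test_file, "w") as f:
    f.write("mount-tester new file\n")
os.chmod(test_file, 0o666)  # explicit chmod, so the process umask does not apply

mode = stat.S_IMODE(os.stat(test_file).st_mode)
print(oct(mode))  # -> 0o666
assert mode == 0o666
```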
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:42:04.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:42:04.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4" in namespace "projected-8087" to be "Succeeded or Failed"
Apr 29 13:42:04.666: INFO: Pod "downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.08402ms
Apr 29 13:42:06.671: INFO: Pod "downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019447895s
Apr 29 13:42:08.674: INFO: Pod "downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023207864s
STEP: Saw pod success
Apr 29 13:42:08.675: INFO: Pod "downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4" satisfied condition "Succeeded or Failed"
Apr 29 13:42:08.677: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4 container client-container: 
STEP: delete the pod
Apr 29 13:42:08.741: INFO: Waiting for pod downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4 to disappear
Apr 29 13:42:08.747: INFO: Pod downwardapi-volume-e5c81b39-c073-4c8d-80ac-608ec57897a4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:42:08.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8087" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":290,"completed":95,"skipped":1467,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:42:08.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 29 13:42:08.808: INFO: Created pod &Pod{ObjectMeta:{dns-1605  dns-1605 /api/v1/namespaces/dns-1605/pods/dns-1605 d575c500-0459-44a7-a253-543cfcc604b7 67041 0 2020-04-29 13:42:08 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-04-29 13:42:08 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qhtz8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qhtz8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qhtz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 13:42:08.811: INFO: The status of Pod dns-1605 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:42:10.815: INFO: The status of Pod dns-1605 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:42:13.054: INFO: The status of Pod dns-1605 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Apr 29 13:42:13.054: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1605 PodName:dns-1605 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:42:13.054: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:42:13.104765       7 log.go:172] (0xc00488edc0) (0xc001e605a0) Create stream
I0429 13:42:13.104797       7 log.go:172] (0xc00488edc0) (0xc001e605a0) Stream added, broadcasting: 1
I0429 13:42:13.106702       7 log.go:172] (0xc00488edc0) Reply frame received for 1
I0429 13:42:13.106739       7 log.go:172] (0xc00488edc0) (0xc000e2a000) Create stream
I0429 13:42:13.106749       7 log.go:172] (0xc00488edc0) (0xc000e2a000) Stream added, broadcasting: 3
I0429 13:42:13.107688       7 log.go:172] (0xc00488edc0) Reply frame received for 3
I0429 13:42:13.107729       7 log.go:172] (0xc00488edc0) (0xc001e60820) Create stream
I0429 13:42:13.107747       7 log.go:172] (0xc00488edc0) (0xc001e60820) Stream added, broadcasting: 5
I0429 13:42:13.108834       7 log.go:172] (0xc00488edc0) Reply frame received for 5
I0429 13:42:13.188595       7 log.go:172] (0xc00488edc0) Data frame received for 3
I0429 13:42:13.188628       7 log.go:172] (0xc000e2a000) (3) Data frame handling
I0429 13:42:13.188651       7 log.go:172] (0xc000e2a000) (3) Data frame sent
I0429 13:42:13.189525       7 log.go:172] (0xc00488edc0) Data frame received for 3
I0429 13:42:13.189546       7 log.go:172] (0xc000e2a000) (3) Data frame handling
I0429 13:42:13.189586       7 log.go:172] (0xc00488edc0) Data frame received for 5
I0429 13:42:13.189627       7 log.go:172] (0xc001e60820) (5) Data frame handling
I0429 13:42:13.191370       7 log.go:172] (0xc00488edc0) Data frame received for 1
I0429 13:42:13.191433       7 log.go:172] (0xc001e605a0) (1) Data frame handling
I0429 13:42:13.191495       7 log.go:172] (0xc001e605a0) (1) Data frame sent
I0429 13:42:13.191543       7 log.go:172] (0xc00488edc0) (0xc001e605a0) Stream removed, broadcasting: 1
I0429 13:42:13.191572       7 log.go:172] (0xc00488edc0) Go away received
I0429 13:42:13.191664       7 log.go:172] (0xc00488edc0) (0xc001e605a0) Stream removed, broadcasting: 1
I0429 13:42:13.191694       7 log.go:172] (0xc00488edc0) (0xc000e2a000) Stream removed, broadcasting: 3
I0429 13:42:13.191701       7 log.go:172] (0xc00488edc0) (0xc001e60820) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Apr 29 13:42:13.191: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1605 PodName:dns-1605 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:42:13.191: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:42:13.220100       7 log.go:172] (0xc004ee6dc0) (0xc000b7d680) Create stream
I0429 13:42:13.220129       7 log.go:172] (0xc004ee6dc0) (0xc000b7d680) Stream added, broadcasting: 1
I0429 13:42:13.221842       7 log.go:172] (0xc004ee6dc0) Reply frame received for 1
I0429 13:42:13.221886       7 log.go:172] (0xc004ee6dc0) (0xc0017fa280) Create stream
I0429 13:42:13.221901       7 log.go:172] (0xc004ee6dc0) (0xc0017fa280) Stream added, broadcasting: 3
I0429 13:42:13.222673       7 log.go:172] (0xc004ee6dc0) Reply frame received for 3
I0429 13:42:13.222698       7 log.go:172] (0xc004ee6dc0) (0xc000e2a320) Create stream
I0429 13:42:13.222707       7 log.go:172] (0xc004ee6dc0) (0xc000e2a320) Stream added, broadcasting: 5
I0429 13:42:13.223497       7 log.go:172] (0xc004ee6dc0) Reply frame received for 5
I0429 13:42:13.305608       7 log.go:172] (0xc004ee6dc0) Data frame received for 3
I0429 13:42:13.305632       7 log.go:172] (0xc0017fa280) (3) Data frame handling
I0429 13:42:13.305648       7 log.go:172] (0xc0017fa280) (3) Data frame sent
I0429 13:42:13.306555       7 log.go:172] (0xc004ee6dc0) Data frame received for 3
I0429 13:42:13.306571       7 log.go:172] (0xc004ee6dc0) Data frame received for 5
I0429 13:42:13.306586       7 log.go:172] (0xc000e2a320) (5) Data frame handling
I0429 13:42:13.306610       7 log.go:172] (0xc0017fa280) (3) Data frame handling
I0429 13:42:13.308033       7 log.go:172] (0xc004ee6dc0) Data frame received for 1
I0429 13:42:13.308048       7 log.go:172] (0xc000b7d680) (1) Data frame handling
I0429 13:42:13.308059       7 log.go:172] (0xc000b7d680) (1) Data frame sent
I0429 13:42:13.308071       7 log.go:172] (0xc004ee6dc0) (0xc000b7d680) Stream removed, broadcasting: 1
I0429 13:42:13.308175       7 log.go:172] (0xc004ee6dc0) (0xc000b7d680) Stream removed, broadcasting: 1
I0429 13:42:13.308192       7 log.go:172] (0xc004ee6dc0) (0xc0017fa280) Stream removed, broadcasting: 3
I0429 13:42:13.308210       7 log.go:172] (0xc004ee6dc0) (0xc000e2a320) Stream removed, broadcasting: 5
I0429 13:42:13.308233       7 log.go:172] (0xc004ee6dc0) Go away received
Apr 29 13:42:13.308: INFO: Deleting pod dns-1605...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:42:13.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1605" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":290,"completed":96,"skipped":1470,"failed":0}
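The DNS test above creates a pod with `DNSPolicy: None` and a custom `DNSConfig` (`Nameservers: [1.1.1.1]`, `Searches: [resolv.conf.local]`, visible in the pod dump), then uses `agnhost dns-server-list` / `dns-suffix` to confirm those values landed in the pod's resolver configuration. A sketch of that verification step, parsing resolv.conf-style content (the sample content is a hypothetical rendering of what the kubelet would write for this dnsConfig):

```python
# Parse resolv.conf-style text and confirm the custom nameserver and search
# suffix from the pod's dnsConfig, mirroring what agnhost checks in the test.
RESOLV_CONF = """\
search resolv.conf.local
nameserver 1.1.1.1
"""

def parse_resolv_conf(text):
    nameservers, searches = [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue
        if fields[0] == "nameserver":
            nameservers.append(fields[1])
        elif fields[0] == "search":
            searches.extend(fields[1:])
    return nameservers, searches

ns, search = parse_resolv_conf(RESOLV_CONF)
assert ns == ["1.1.1.1"]
assert "resolv.conf.local" in search
```

With `dnsPolicy: None` the cluster's own resolver entries are omitted entirely, which is why the test can assert on exactly the values it supplied.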
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:42:13.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:14.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2932" for this suite.

• [SLOW TEST:60.697 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":290,"completed":97,"skipped":1480,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:14.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:43:14.236: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:15.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2216" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":290,"completed":98,"skipped":1480,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:15.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:43:15.424: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b" in namespace "security-context-test-8849" to be "Succeeded or Failed"
Apr 29 13:43:15.427: INFO: Pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511522ms
Apr 29 13:43:17.470: INFO: Pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045934663s
Apr 29 13:43:19.494: INFO: Pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070186406s
Apr 29 13:43:19.494: INFO: Pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b" satisfied condition "Succeeded or Failed"
Apr 29 13:43:19.504: INFO: Got logs for pod "busybox-privileged-false-190a2d87-3120-421a-9992-5fc4d9a14a9b": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:19.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8849" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":99,"skipped":1514,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:19.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 29 13:43:20.009: INFO: Waiting up to 5m0s for pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915" in namespace "downward-api-6011" to be "Succeeded or Failed"
Apr 29 13:43:20.026: INFO: Pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915": Phase="Pending", Reason="", readiness=false. Elapsed: 17.314398ms
Apr 29 13:43:22.031: INFO: Pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02190272s
Apr 29 13:43:24.150: INFO: Pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915": Phase="Running", Reason="", readiness=true. Elapsed: 4.141355175s
Apr 29 13:43:26.155: INFO: Pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146573844s
STEP: Saw pod success
Apr 29 13:43:26.155: INFO: Pod "downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915" satisfied condition "Succeeded or Failed"
Apr 29 13:43:26.159: INFO: Trying to get logs from node kali-worker2 pod downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915 container dapi-container: 
STEP: delete the pod
Apr 29 13:43:26.210: INFO: Waiting for pod downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915 to disappear
Apr 29 13:43:26.214: INFO: Pod downward-api-bbcf4258-6f1f-4bc3-9769-511a0940b915 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:26.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6011" for this suite.

• [SLOW TEST:6.358 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":290,"completed":100,"skipped":1527,"failed":0}
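The Downward API test above injects `status.hostIP` into the container's environment and checks the variable holds a real address. A sketch of the container-side check (the env var name `HOST_IP` and the seeded value are illustrative; the value matches the `HostIP:"172.17.0.18"` seen earlier in this log):

```python
import ipaddress
import os

# Seed the variable the downward API would inject via fieldRef: status.hostIP.
# In a real pod the kubelet sets this; the value here is taken from the log above.
os.environ.setdefault("HOST_IP", "172.17.0.18")

host_ip = os.environ["HOST_IP"]
addr = ipaddress.ip_address(host_ip)  # raises ValueError if the value is malformed
print(addr.version)  # -> 4 (an IPv4 host address, matching the cluster's IP family)
```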
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:26.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:43:26.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 29 13:43:29.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4744 create -f -'
Apr 29 13:43:32.849: INFO: stderr: ""
Apr 29 13:43:32.849: INFO: stdout: "e2e-test-crd-publish-openapi-5641-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 29 13:43:32.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4744 delete e2e-test-crd-publish-openapi-5641-crds test-cr'
Apr 29 13:43:32.980: INFO: stderr: ""
Apr 29 13:43:32.980: INFO: stdout: "e2e-test-crd-publish-openapi-5641-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 29 13:43:32.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4744 apply -f -'
Apr 29 13:43:33.245: INFO: stderr: ""
Apr 29 13:43:33.245: INFO: stdout: "e2e-test-crd-publish-openapi-5641-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 29 13:43:33.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4744 delete e2e-test-crd-publish-openapi-5641-crds test-cr'
Apr 29 13:43:33.397: INFO: stderr: ""
Apr 29 13:43:33.397: INFO: stdout: "e2e-test-crd-publish-openapi-5641-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 29 13:43:33.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5641-crds'
Apr 29 13:43:33.642: INFO: stderr: ""
Apr 29 13:43:33.643: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5641-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:36.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4744" for this suite.

• [SLOW TEST:10.371 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":290,"completed":101,"skipped":1540,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:36.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-3ab25dac-998d-49bd-98f1-7c0fce07272b
STEP: Creating a pod to test consume secrets
Apr 29 13:43:36.670: INFO: Waiting up to 5m0s for pod "pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8" in namespace "secrets-4236" to be "Succeeded or Failed"
Apr 29 13:43:36.694: INFO: Pod "pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.077577ms
Apr 29 13:43:38.832: INFO: Pod "pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162419434s
Apr 29 13:43:40.837: INFO: Pod "pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167194168s
STEP: Saw pod success
Apr 29 13:43:40.837: INFO: Pod "pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8" satisfied condition "Succeeded or Failed"
Apr 29 13:43:40.840: INFO: Trying to get logs from node kali-worker pod pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8 container secret-volume-test: 
STEP: delete the pod
Apr 29 13:43:40.891: INFO: Waiting for pod pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8 to disappear
Apr 29 13:43:40.895: INFO: Pod pod-secrets-fb5cf9d1-d4ce-44b5-8b63-a81f346198e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4236" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":102,"skipped":1550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:40.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-3262037f-a1a7-4036-941f-02e75d9473e4
STEP: Creating a pod to test consume secrets
Apr 29 13:43:41.001: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e" in namespace "projected-6849" to be "Succeeded or Failed"
Apr 29 13:43:41.058: INFO: Pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e": Phase="Pending", Reason="", readiness=false. Elapsed: 57.22036ms
Apr 29 13:43:43.062: INFO: Pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061113709s
Apr 29 13:43:45.067: INFO: Pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065717232s
Apr 29 13:43:47.099: INFO: Pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097743784s
STEP: Saw pod success
Apr 29 13:43:47.099: INFO: Pod "pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e" satisfied condition "Succeeded or Failed"
Apr 29 13:43:47.102: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 13:43:47.471: INFO: Waiting for pod pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e to disappear
Apr 29 13:43:47.628: INFO: Pod pod-projected-secrets-6bae6515-6378-4b4d-bc03-47050188764e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:43:47.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6849" for this suite.

• [SLOW TEST:6.729 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":103,"skipped":1572,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:43:47.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-9dba9090-044b-444f-8b09-9279a8a7a0ba
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-9dba9090-044b-444f-8b09-9279a8a7a0ba
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:44:56.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7867" for this suite.

• [SLOW TEST:69.296 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":104,"skipped":1582,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:44:56.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee
Apr 29 13:44:57.035: INFO: Pod name my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee: Found 0 pods out of 1
Apr 29 13:45:02.038: INFO: Pod name my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee: Found 1 pods out of 1
Apr 29 13:45:02.038: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee" are running
Apr 29 13:45:02.048: INFO: Pod "my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee-l9h7d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:44:57 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:45:00 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:45:00 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-29 13:44:57 +0000 UTC Reason: Message:}])
Apr 29 13:45:02.048: INFO: Trying to dial the pod
Apr 29 13:45:07.062: INFO: Controller my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee: Got expected result from replica 1 [my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee-l9h7d]: "my-hostname-basic-95836eb3-826f-47d3-aaac-1f0405a1d6ee-l9h7d", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:07.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9117" for this suite.

• [SLOW TEST:10.139 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":290,"completed":105,"skipped":1590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:07.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Apr 29 13:45:07.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
Apr 29 13:45:07.371: INFO: stderr: ""
Apr 29 13:45:07.371: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:07.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-258" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":290,"completed":106,"skipped":1682,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:07.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-72e06412-3f5d-4e8f-bc32-411bede1d28a
STEP: Creating a pod to test consume secrets
Apr 29 13:45:07.486: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da" in namespace "projected-4304" to be "Succeeded or Failed"
Apr 29 13:45:07.600: INFO: Pod "pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da": Phase="Pending", Reason="", readiness=false. Elapsed: 113.526619ms
Apr 29 13:45:09.647: INFO: Pod "pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161252828s
Apr 29 13:45:11.701: INFO: Pod "pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215008546s
STEP: Saw pod success
Apr 29 13:45:11.701: INFO: Pod "pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da" satisfied condition "Succeeded or Failed"
Apr 29 13:45:11.704: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 13:45:11.757: INFO: Waiting for pod pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da to disappear
Apr 29 13:45:11.771: INFO: Pod pod-projected-secrets-1179f349-72cf-43b3-97de-9520eff281da no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:11.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4304" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":107,"skipped":1701,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:11.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-48d19c28-95b8-4a17-a270-146b90a6996a
STEP: Creating a pod to test consume secrets
Apr 29 13:45:11.894: INFO: Waiting up to 5m0s for pod "pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a" in namespace "secrets-5290" to be "Succeeded or Failed"
Apr 29 13:45:11.994: INFO: Pod "pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 99.147293ms
Apr 29 13:45:14.042: INFO: Pod "pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148097237s
Apr 29 13:45:16.047: INFO: Pod "pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152616212s
STEP: Saw pod success
Apr 29 13:45:16.047: INFO: Pod "pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a" satisfied condition "Succeeded or Failed"
Apr 29 13:45:16.050: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a container secret-volume-test: 
STEP: delete the pod
Apr 29 13:45:16.169: INFO: Waiting for pod pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a to disappear
Apr 29 13:45:16.172: INFO: Pod pod-secrets-8b00c453-f991-480b-ade8-0f4a2e02fe5a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:16.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5290" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":108,"skipped":1704,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:16.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1773
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1773
I0429 13:45:16.488234       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1773, replica count: 2
I0429 13:45:19.538611       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:45:22.538868       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:45:22.538: INFO: Creating new exec pod
Apr 29 13:45:29.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1773 execpodxz2wz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Apr 29 13:45:29.855: INFO: stderr: "I0429 13:45:29.744638    1990 log.go:172] (0xc000be0a50) (0xc0005b9220) Create stream\nI0429 13:45:29.744700    1990 log.go:172] (0xc000be0a50) (0xc0005b9220) Stream added, broadcasting: 1\nI0429 13:45:29.747268    1990 log.go:172] (0xc000be0a50) Reply frame received for 1\nI0429 13:45:29.747298    1990 log.go:172] (0xc000be0a50) (0xc0006d65a0) Create stream\nI0429 13:45:29.747305    1990 log.go:172] (0xc000be0a50) (0xc0006d65a0) Stream added, broadcasting: 3\nI0429 13:45:29.748403    1990 log.go:172] (0xc000be0a50) Reply frame received for 3\nI0429 13:45:29.748435    1990 log.go:172] (0xc000be0a50) (0xc00056a6e0) Create stream\nI0429 13:45:29.748447    1990 log.go:172] (0xc000be0a50) (0xc00056a6e0) Stream added, broadcasting: 5\nI0429 13:45:29.749397    1990 log.go:172] (0xc000be0a50) Reply frame received for 5\nI0429 13:45:29.847971    1990 log.go:172] (0xc000be0a50) Data frame received for 5\nI0429 13:45:29.848010    1990 log.go:172] (0xc00056a6e0) (5) Data frame handling\nI0429 13:45:29.848036    1990 log.go:172] (0xc00056a6e0) (5) Data frame sent\nI0429 13:45:29.848045    1990 log.go:172] (0xc000be0a50) Data frame received for 5\nI0429 13:45:29.848053    1990 log.go:172] (0xc00056a6e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0429 13:45:29.848088    1990 log.go:172] (0xc00056a6e0) (5) Data frame sent\nI0429 13:45:29.848605    1990 log.go:172] (0xc000be0a50) Data frame received for 5\nI0429 13:45:29.848649    1990 log.go:172] (0xc00056a6e0) (5) Data frame handling\nI0429 13:45:29.848683    1990 log.go:172] (0xc000be0a50) Data frame received for 3\nI0429 13:45:29.848699    1990 log.go:172] (0xc0006d65a0) (3) Data frame handling\nI0429 13:45:29.850859    1990 log.go:172] (0xc000be0a50) Data frame received for 1\nI0429 13:45:29.850879    1990 log.go:172] (0xc0005b9220) (1) Data frame handling\nI0429 13:45:29.850889    1990 log.go:172] (0xc0005b9220) (1) Data frame sent\nI0429 13:45:29.850899    1990 log.go:172] (0xc000be0a50) (0xc0005b9220) Stream removed, broadcasting: 1\nI0429 13:45:29.850918    1990 log.go:172] (0xc000be0a50) Go away received\nI0429 13:45:29.851185    1990 log.go:172] (0xc000be0a50) (0xc0005b9220) Stream removed, broadcasting: 1\nI0429 13:45:29.851203    1990 log.go:172] (0xc000be0a50) (0xc0006d65a0) Stream removed, broadcasting: 3\nI0429 13:45:29.851213    1990 log.go:172] (0xc000be0a50) (0xc00056a6e0) Stream removed, broadcasting: 5\n"
Apr 29 13:45:29.855: INFO: stdout: ""
Apr 29 13:45:29.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1773 execpodxz2wz -- /bin/sh -x -c nc -zv -t -w 2 10.101.47.104 80'
Apr 29 13:45:30.053: INFO: stderr: "I0429 13:45:29.990081    2010 log.go:172] (0xc000abb130) (0xc0009c6280) Create stream\nI0429 13:45:29.990143    2010 log.go:172] (0xc000abb130) (0xc0009c6280) Stream added, broadcasting: 1\nI0429 13:45:29.994369    2010 log.go:172] (0xc000abb130) Reply frame received for 1\nI0429 13:45:29.994414    2010 log.go:172] (0xc000abb130) (0xc00084e500) Create stream\nI0429 13:45:29.994424    2010 log.go:172] (0xc000abb130) (0xc00084e500) Stream added, broadcasting: 3\nI0429 13:45:29.995259    2010 log.go:172] (0xc000abb130) Reply frame received for 3\nI0429 13:45:29.995289    2010 log.go:172] (0xc000abb130) (0xc0006321e0) Create stream\nI0429 13:45:29.995299    2010 log.go:172] (0xc000abb130) (0xc0006321e0) Stream added, broadcasting: 5\nI0429 13:45:29.996032    2010 log.go:172] (0xc000abb130) Reply frame received for 5\nI0429 13:45:30.048498    2010 log.go:172] (0xc000abb130) Data frame received for 5\nI0429 13:45:30.048526    2010 log.go:172] (0xc0006321e0) (5) Data frame handling\nI0429 13:45:30.048533    2010 log.go:172] (0xc0006321e0) (5) Data frame sent\nI0429 13:45:30.048539    2010 log.go:172] (0xc000abb130) Data frame received for 5\nI0429 13:45:30.048543    2010 log.go:172] (0xc0006321e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.47.104 80\nConnection to 10.101.47.104 80 port [tcp/http] succeeded!\nI0429 13:45:30.048578    2010 log.go:172] (0xc000abb130) Data frame received for 3\nI0429 13:45:30.048632    2010 log.go:172] (0xc00084e500) (3) Data frame handling\nI0429 13:45:30.050150    2010 log.go:172] (0xc000abb130) Data frame received for 1\nI0429 13:45:30.050172    2010 log.go:172] (0xc0009c6280) (1) Data frame handling\nI0429 13:45:30.050185    2010 log.go:172] (0xc0009c6280) (1) Data frame sent\nI0429 13:45:30.050205    2010 log.go:172] (0xc000abb130) (0xc0009c6280) Stream removed, broadcasting: 1\nI0429 13:45:30.050250    2010 log.go:172] (0xc000abb130) Go away received\nI0429 13:45:30.050516    2010 log.go:172] (0xc000abb130) (0xc0009c6280) Stream removed, broadcasting: 1\nI0429 13:45:30.050533    2010 log.go:172] (0xc000abb130) (0xc00084e500) Stream removed, broadcasting: 3\nI0429 13:45:30.050541    2010 log.go:172] (0xc000abb130) (0xc0006321e0) Stream removed, broadcasting: 5\n"
Apr 29 13:45:30.054: INFO: stdout: ""
Apr 29 13:45:30.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1773 execpodxz2wz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30727'
Apr 29 13:45:30.254: INFO: stderr: "I0429 13:45:30.186365    2032 log.go:172] (0xc000a84790) (0xc00066e780) Create stream\nI0429 13:45:30.186417    2032 log.go:172] (0xc000a84790) (0xc00066e780) Stream added, broadcasting: 1\nI0429 13:45:30.189011    2032 log.go:172] (0xc000a84790) Reply frame received for 1\nI0429 13:45:30.189056    2032 log.go:172] (0xc000a84790) (0xc000606780) Create stream\nI0429 13:45:30.189084    2032 log.go:172] (0xc000a84790) (0xc000606780) Stream added, broadcasting: 3\nI0429 13:45:30.190092    2032 log.go:172] (0xc000a84790) Reply frame received for 3\nI0429 13:45:30.190133    2032 log.go:172] (0xc000a84790) (0xc000606c80) Create stream\nI0429 13:45:30.190152    2032 log.go:172] (0xc000a84790) (0xc000606c80) Stream added, broadcasting: 5\nI0429 13:45:30.190962    2032 log.go:172] (0xc000a84790) Reply frame received for 5\nI0429 13:45:30.247922    2032 log.go:172] (0xc000a84790) Data frame received for 5\nI0429 13:45:30.247976    2032 log.go:172] (0xc000606c80) (5) Data frame handling\nI0429 13:45:30.248015    2032 log.go:172] (0xc000606c80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 30727\nConnection to 172.17.0.15 30727 port [tcp/30727] succeeded!\nI0429 13:45:30.248645    2032 log.go:172] (0xc000a84790) Data frame received for 5\nI0429 13:45:30.248692    2032 log.go:172] (0xc000606c80) (5) Data frame handling\nI0429 13:45:30.248729    2032 log.go:172] (0xc000a84790) Data frame received for 3\nI0429 13:45:30.248750    2032 log.go:172] (0xc000606780) (3) Data frame handling\nI0429 13:45:30.250636    2032 log.go:172] (0xc000a84790) Data frame received for 1\nI0429 13:45:30.250651    2032 log.go:172] (0xc00066e780) (1) Data frame handling\nI0429 13:45:30.250666    2032 log.go:172] (0xc00066e780) (1) Data frame sent\nI0429 13:45:30.250681    2032 log.go:172] (0xc000a84790) (0xc00066e780) Stream removed, broadcasting: 1\nI0429 13:45:30.250770    2032 log.go:172] (0xc000a84790) Go away received\nI0429 13:45:30.250939    2032 log.go:172] (0xc000a84790) (0xc00066e780) Stream removed, broadcasting: 1\nI0429 13:45:30.250950    2032 log.go:172] (0xc000a84790) (0xc000606780) Stream removed, broadcasting: 3\nI0429 13:45:30.250956    2032 log.go:172] (0xc000a84790) (0xc000606c80) Stream removed, broadcasting: 5\n"
Apr 29 13:45:30.254: INFO: stdout: ""
Apr 29 13:45:30.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1773 execpodxz2wz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30727'
Apr 29 13:45:30.498: INFO: stderr: "I0429 13:45:30.400062    2053 log.go:172] (0xc0009898c0) (0xc000817cc0) Create stream\nI0429 13:45:30.400118    2053 log.go:172] (0xc0009898c0) (0xc000817cc0) Stream added, broadcasting: 1\nI0429 13:45:30.403438    2053 log.go:172] (0xc0009898c0) Reply frame received for 1\nI0429 13:45:30.403495    2053 log.go:172] (0xc0009898c0) (0xc000820aa0) Create stream\nI0429 13:45:30.403527    2053 log.go:172] (0xc0009898c0) (0xc000820aa0) Stream added, broadcasting: 3\nI0429 13:45:30.404612    2053 log.go:172] (0xc0009898c0) Reply frame received for 3\nI0429 13:45:30.404673    2053 log.go:172] (0xc0009898c0) (0xc00082a640) Create stream\nI0429 13:45:30.404702    2053 log.go:172] (0xc0009898c0) (0xc00082a640) Stream added, broadcasting: 5\nI0429 13:45:30.405970    2053 log.go:172] (0xc0009898c0) Reply frame received for 5\nI0429 13:45:30.489890    2053 log.go:172] (0xc0009898c0) Data frame received for 5\nI0429 13:45:30.489959    2053 log.go:172] (0xc00082a640) (5) Data frame handling\nI0429 13:45:30.489988    2053 log.go:172] (0xc00082a640) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 30727\nConnection to 172.17.0.18 30727 port [tcp/30727] succeeded!\nI0429 13:45:30.490150    2053 log.go:172] (0xc0009898c0) Data frame received for 3\nI0429 13:45:30.490196    2053 log.go:172] (0xc000820aa0) (3) Data frame handling\nI0429 13:45:30.490238    2053 log.go:172] (0xc0009898c0) Data frame received for 5\nI0429 13:45:30.490276    2053 log.go:172] (0xc00082a640) (5) Data frame handling\nI0429 13:45:30.492005    2053 log.go:172] (0xc0009898c0) Data frame received for 1\nI0429 13:45:30.492043    2053 log.go:172] (0xc000817cc0) (1) Data frame handling\nI0429 13:45:30.492070    2053 log.go:172] (0xc000817cc0) (1) Data frame sent\nI0429 13:45:30.492101    2053 log.go:172] (0xc0009898c0) (0xc000817cc0) Stream removed, broadcasting: 1\nI0429 13:45:30.492152    2053 log.go:172] (0xc0009898c0) Go away received\nI0429 13:45:30.492624    2053 log.go:172] (0xc0009898c0) (0xc000817cc0) Stream removed, broadcasting: 1\nI0429 13:45:30.492649    2053 log.go:172] (0xc0009898c0) (0xc000820aa0) Stream removed, broadcasting: 3\nI0429 13:45:30.492680    2053 log.go:172] (0xc0009898c0) (0xc00082a640) Stream removed, broadcasting: 5\n"
Apr 29 13:45:30.498: INFO: stdout: ""
Apr 29 13:45:30.498: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:30.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1773" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:14.415 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":290,"completed":109,"skipped":1722,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:30.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:45:30.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8549
I0429 13:45:30.706046       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8549, replica count: 1
I0429 13:45:31.756519       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:45:32.756785       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:45:33.757023       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:45:33.881: INFO: Created: latency-svc-7fgw9
Apr 29 13:45:33.912: INFO: Got endpoints: latency-svc-7fgw9 [55.551846ms]
Apr 29 13:45:34.001: INFO: Created: latency-svc-dcnzt
Apr 29 13:45:34.060: INFO: Got endpoints: latency-svc-dcnzt [146.924822ms]
Apr 29 13:45:34.060: INFO: Created: latency-svc-jmkdc
Apr 29 13:45:34.139: INFO: Got endpoints: latency-svc-jmkdc [226.550751ms]
Apr 29 13:45:34.156: INFO: Created: latency-svc-g48sf
Apr 29 13:45:34.171: INFO: Got endpoints: latency-svc-g48sf [258.234524ms]
Apr 29 13:45:34.194: INFO: Created: latency-svc-zrz9h
Apr 29 13:45:34.211: INFO: Got endpoints: latency-svc-zrz9h [297.681832ms]
Apr 29 13:45:34.271: INFO: Created: latency-svc-chsbb
Apr 29 13:45:34.276: INFO: Got endpoints: latency-svc-chsbb [362.640178ms]
Apr 29 13:45:34.307: INFO: Created: latency-svc-fd9fg
Apr 29 13:45:34.320: INFO: Got endpoints: latency-svc-fd9fg [406.887512ms]
Apr 29 13:45:34.349: INFO: Created: latency-svc-bqvj9
Apr 29 13:45:34.420: INFO: Got endpoints: latency-svc-bqvj9 [507.262335ms]
Apr 29 13:45:34.428: INFO: Created: latency-svc-bkczs
Apr 29 13:45:34.470: INFO: Got endpoints: latency-svc-bkczs [557.555492ms]
Apr 29 13:45:34.511: INFO: Created: latency-svc-4dvz9
Apr 29 13:45:34.552: INFO: Got endpoints: latency-svc-4dvz9 [639.217956ms]
Apr 29 13:45:34.564: INFO: Created: latency-svc-lswdv
Apr 29 13:45:34.578: INFO: Got endpoints: latency-svc-lswdv [664.947166ms]
Apr 29 13:45:34.602: INFO: Created: latency-svc-z97fk
Apr 29 13:45:34.614: INFO: Got endpoints: latency-svc-z97fk [701.194456ms]
Apr 29 13:45:34.638: INFO: Created: latency-svc-9wmtf
Apr 29 13:45:34.651: INFO: Got endpoints: latency-svc-9wmtf [738.133103ms]
Apr 29 13:45:34.696: INFO: Created: latency-svc-725bh
Apr 29 13:45:34.705: INFO: Got endpoints: latency-svc-725bh [791.957596ms]
Apr 29 13:45:34.740: INFO: Created: latency-svc-zcmrp
Apr 29 13:45:34.772: INFO: Got endpoints: latency-svc-zcmrp [858.487377ms]
Apr 29 13:45:34.793: INFO: Created: latency-svc-82zfx
Apr 29 13:45:34.857: INFO: Got endpoints: latency-svc-82zfx [944.263932ms]
Apr 29 13:45:34.884: INFO: Created: latency-svc-7vg4z
Apr 29 13:45:34.920: INFO: Got endpoints: latency-svc-7vg4z [859.884861ms]
Apr 29 13:45:35.019: INFO: Created: latency-svc-vnndb
Apr 29 13:45:35.025: INFO: Got endpoints: latency-svc-vnndb [886.334482ms]
Apr 29 13:45:35.063: INFO: Created: latency-svc-b4ztt
Apr 29 13:45:35.084: INFO: Got endpoints: latency-svc-b4ztt [913.302553ms]
Apr 29 13:45:35.106: INFO: Created: latency-svc-vqgw7
Apr 29 13:45:35.145: INFO: Got endpoints: latency-svc-vqgw7 [934.003278ms]
Apr 29 13:45:35.147: INFO: Created: latency-svc-2fbmr
Apr 29 13:45:35.162: INFO: Got endpoints: latency-svc-2fbmr [886.576998ms]
Apr 29 13:45:35.183: INFO: Created: latency-svc-7lxhw
Apr 29 13:45:35.199: INFO: Got endpoints: latency-svc-7lxhw [878.580876ms]
Apr 29 13:45:35.289: INFO: Created: latency-svc-p9ctg
Apr 29 13:45:35.321: INFO: Got endpoints: latency-svc-p9ctg [900.852886ms]
Apr 29 13:45:35.321: INFO: Created: latency-svc-5wcwm
Apr 29 13:45:35.343: INFO: Got endpoints: latency-svc-5wcwm [873.014664ms]
Apr 29 13:45:35.370: INFO: Created: latency-svc-6nsmf
Apr 29 13:45:36.685: INFO: Got endpoints: latency-svc-6nsmf [2.132995302s]
Apr 29 13:45:36.716: INFO: Created: latency-svc-m2n4t
Apr 29 13:45:36.729: INFO: Got endpoints: latency-svc-m2n4t [2.151249117s]
Apr 29 13:45:36.833: INFO: Created: latency-svc-nptzr
Apr 29 13:45:36.837: INFO: Got endpoints: latency-svc-nptzr [2.223232529s]
Apr 29 13:45:36.879: INFO: Created: latency-svc-7r6n5
Apr 29 13:45:36.908: INFO: Got endpoints: latency-svc-7r6n5 [2.257519786s]
Apr 29 13:45:37.001: INFO: Created: latency-svc-4j7n9
Apr 29 13:45:37.030: INFO: Got endpoints: latency-svc-4j7n9 [2.32483479s]
Apr 29 13:45:37.031: INFO: Created: latency-svc-4m84k
Apr 29 13:45:37.059: INFO: Got endpoints: latency-svc-4m84k [2.287814652s]
Apr 29 13:45:37.152: INFO: Created: latency-svc-hr8lg
Apr 29 13:45:37.162: INFO: Got endpoints: latency-svc-hr8lg [2.305183493s]
Apr 29 13:45:37.215: INFO: Created: latency-svc-2dw6p
Apr 29 13:45:37.229: INFO: Got endpoints: latency-svc-2dw6p [2.308789189s]
Apr 29 13:45:37.314: INFO: Created: latency-svc-5gtzk
Apr 29 13:45:37.348: INFO: Got endpoints: latency-svc-5gtzk [2.322119506s]
Apr 29 13:45:37.348: INFO: Created: latency-svc-8gvsl
Apr 29 13:45:37.382: INFO: Got endpoints: latency-svc-8gvsl [2.298235737s]
Apr 29 13:45:37.476: INFO: Created: latency-svc-6rc4k
Apr 29 13:45:37.482: INFO: Got endpoints: latency-svc-6rc4k [2.337204749s]
Apr 29 13:45:37.516: INFO: Created: latency-svc-cdcs7
Apr 29 13:45:37.546: INFO: Got endpoints: latency-svc-cdcs7 [2.383217441s]
Apr 29 13:45:37.624: INFO: Created: latency-svc-fwjmf
Apr 29 13:45:37.664: INFO: Got endpoints: latency-svc-fwjmf [2.465686984s]
Apr 29 13:45:37.665: INFO: Created: latency-svc-t4fjz
Apr 29 13:45:37.702: INFO: Got endpoints: latency-svc-t4fjz [2.380713503s]
Apr 29 13:45:37.763: INFO: Created: latency-svc-2bdmp
Apr 29 13:45:37.765: INFO: Got endpoints: latency-svc-2bdmp [2.42207583s]
Apr 29 13:45:37.791: INFO: Created: latency-svc-cpcwd
Apr 29 13:45:37.806: INFO: Got endpoints: latency-svc-cpcwd [1.120575485s]
Apr 29 13:45:37.832: INFO: Created: latency-svc-km7ld
Apr 29 13:45:37.848: INFO: Got endpoints: latency-svc-km7ld [1.118966698s]
Apr 29 13:45:37.900: INFO: Created: latency-svc-7qhq8
Apr 29 13:45:37.928: INFO: Got endpoints: latency-svc-7qhq8 [1.090419692s]
Apr 29 13:45:37.930: INFO: Created: latency-svc-vd7dm
Apr 29 13:45:37.953: INFO: Got endpoints: latency-svc-vd7dm [1.044788277s]
Apr 29 13:45:37.995: INFO: Created: latency-svc-jf9nb
Apr 29 13:45:38.061: INFO: Got endpoints: latency-svc-jf9nb [1.031079907s]
Apr 29 13:45:38.090: INFO: Created: latency-svc-tn646
Apr 29 13:45:38.101: INFO: Got endpoints: latency-svc-tn646 [1.041844908s]
Apr 29 13:45:38.144: INFO: Created: latency-svc-7cwsg
Apr 29 13:45:38.155: INFO: Got endpoints: latency-svc-7cwsg [993.172862ms]
Apr 29 13:45:38.199: INFO: Created: latency-svc-8r4qp
Apr 29 13:45:38.216: INFO: Got endpoints: latency-svc-8r4qp [987.266617ms]
Apr 29 13:45:38.265: INFO: Created: latency-svc-r56s6
Apr 29 13:45:38.282: INFO: Got endpoints: latency-svc-r56s6 [934.444321ms]
Apr 29 13:45:38.340: INFO: Created: latency-svc-rntjl
Apr 29 13:45:38.344: INFO: Got endpoints: latency-svc-rntjl [961.280721ms]
Apr 29 13:45:38.372: INFO: Created: latency-svc-nbt2c
Apr 29 13:45:38.402: INFO: Got endpoints: latency-svc-nbt2c [920.002407ms]
Apr 29 13:45:38.432: INFO: Created: latency-svc-4ddfb
Apr 29 13:45:38.486: INFO: Got endpoints: latency-svc-4ddfb [940.535243ms]
Apr 29 13:45:38.523: INFO: Created: latency-svc-hnvzn
Apr 29 13:45:38.532: INFO: Got endpoints: latency-svc-hnvzn [868.074012ms]
Apr 29 13:45:38.582: INFO: Created: latency-svc-2kh7k
Apr 29 13:45:38.649: INFO: Got endpoints: latency-svc-2kh7k [946.934132ms]
Apr 29 13:45:38.691: INFO: Created: latency-svc-2hmdm
Apr 29 13:45:38.719: INFO: Got endpoints: latency-svc-2hmdm [953.20044ms]
Apr 29 13:45:38.740: INFO: Created: latency-svc-p9c6m
Apr 29 13:45:38.803: INFO: Got endpoints: latency-svc-p9c6m [997.367676ms]
Apr 29 13:45:38.834: INFO: Created: latency-svc-55nxw
Apr 29 13:45:38.864: INFO: Got endpoints: latency-svc-55nxw [1.015206152s]
Apr 29 13:45:38.953: INFO: Created: latency-svc-php6b
Apr 29 13:45:38.984: INFO: Got endpoints: latency-svc-php6b [1.055849462s]
Apr 29 13:45:39.026: INFO: Created: latency-svc-ct2dt
Apr 29 13:45:39.096: INFO: Got endpoints: latency-svc-ct2dt [1.14341356s]
Apr 29 13:45:39.147: INFO: Created: latency-svc-zgc9z
Apr 29 13:45:39.175: INFO: Got endpoints: latency-svc-zgc9z [1.114321002s]
Apr 29 13:45:39.314: INFO: Created: latency-svc-8skbd
Apr 29 13:45:39.326: INFO: Got endpoints: latency-svc-8skbd [1.224941976s]
Apr 29 13:45:39.381: INFO: Created: latency-svc-qxkgh
Apr 29 13:45:39.474: INFO: Got endpoints: latency-svc-qxkgh [1.318821177s]
Apr 29 13:45:39.475: INFO: Created: latency-svc-xsmdc
Apr 29 13:45:39.523: INFO: Got endpoints: latency-svc-xsmdc [1.307526535s]
Apr 29 13:45:39.573: INFO: Created: latency-svc-9wzg2
Apr 29 13:45:39.654: INFO: Got endpoints: latency-svc-9wzg2 [1.372002923s]
Apr 29 13:45:39.734: INFO: Created: latency-svc-m6hdf
Apr 29 13:45:39.821: INFO: Got endpoints: latency-svc-m6hdf [1.477277096s]
Apr 29 13:45:39.843: INFO: Created: latency-svc-59drl
Apr 29 13:45:39.867: INFO: Created: latency-svc-slfpl
Apr 29 13:45:39.867: INFO: Got endpoints: latency-svc-59drl [1.465277637s]
Apr 29 13:45:39.920: INFO: Got endpoints: latency-svc-slfpl [1.43344738s]
Apr 29 13:45:40.029: INFO: Created: latency-svc-wn8wq
Apr 29 13:45:40.048: INFO: Got endpoints: latency-svc-wn8wq [1.515649021s]
Apr 29 13:45:40.078: INFO: Created: latency-svc-2lxk8
Apr 29 13:45:40.139: INFO: Got endpoints: latency-svc-2lxk8 [1.489963776s]
Apr 29 13:45:40.178: INFO: Created: latency-svc-6tzzr
Apr 29 13:45:40.201: INFO: Got endpoints: latency-svc-6tzzr [1.482455407s]
Apr 29 13:45:40.319: INFO: Created: latency-svc-kqs8c
Apr 29 13:45:40.323: INFO: Got endpoints: latency-svc-kqs8c [1.519202909s]
Apr 29 13:45:40.382: INFO: Created: latency-svc-kmsks
Apr 29 13:45:40.410: INFO: Got endpoints: latency-svc-kmsks [1.546381228s]
Apr 29 13:45:40.462: INFO: Created: latency-svc-579w6
Apr 29 13:45:40.483: INFO: Got endpoints: latency-svc-579w6 [1.498719636s]
Apr 29 13:45:40.520: INFO: Created: latency-svc-pqmb2
Apr 29 13:45:40.536: INFO: Got endpoints: latency-svc-pqmb2 [1.439750755s]
Apr 29 13:45:40.623: INFO: Created: latency-svc-2pcjj
Apr 29 13:45:40.627: INFO: Got endpoints: latency-svc-2pcjj [1.451821218s]
Apr 29 13:45:40.670: INFO: Created: latency-svc-n6qsr
Apr 29 13:45:40.687: INFO: Got endpoints: latency-svc-n6qsr [1.360484205s]
Apr 29 13:45:40.775: INFO: Created: latency-svc-2tsgd
Apr 29 13:45:40.995: INFO: Got endpoints: latency-svc-2tsgd [1.521120751s]
Apr 29 13:45:41.278: INFO: Created: latency-svc-4jgd4
Apr 29 13:45:41.306: INFO: Got endpoints: latency-svc-4jgd4 [1.7829441s]
Apr 29 13:45:41.367: INFO: Created: latency-svc-244xq
Apr 29 13:45:41.456: INFO: Got endpoints: latency-svc-244xq [1.801684891s]
Apr 29 13:45:41.734: INFO: Created: latency-svc-d9tlf
Apr 29 13:45:41.806: INFO: Got endpoints: latency-svc-d9tlf [1.984598347s]
Apr 29 13:45:41.881: INFO: Created: latency-svc-whldx
Apr 29 13:45:41.904: INFO: Got endpoints: latency-svc-whldx [2.036814114s]
Apr 29 13:45:41.977: INFO: Created: latency-svc-gskjs
Apr 29 13:45:42.049: INFO: Got endpoints: latency-svc-gskjs [2.129243862s]
Apr 29 13:45:42.075: INFO: Created: latency-svc-nlgqc
Apr 29 13:45:42.115: INFO: Got endpoints: latency-svc-nlgqc [2.06720716s]
Apr 29 13:45:42.186: INFO: Created: latency-svc-xm56s
Apr 29 13:45:42.190: INFO: Got endpoints: latency-svc-xm56s [2.051403938s]
Apr 29 13:45:42.495: INFO: Created: latency-svc-kxkr2
Apr 29 13:45:42.627: INFO: Got endpoints: latency-svc-kxkr2 [2.425866939s]
Apr 29 13:45:42.797: INFO: Created: latency-svc-ht9cp
Apr 29 13:45:42.851: INFO: Got endpoints: latency-svc-ht9cp [2.528539444s]
Apr 29 13:45:43.118: INFO: Created: latency-svc-62jwh
Apr 29 13:45:43.313: INFO: Got endpoints: latency-svc-62jwh [2.903143835s]
Apr 29 13:45:43.397: INFO: Created: latency-svc-lxlb8
Apr 29 13:45:43.408: INFO: Got endpoints: latency-svc-lxlb8 [2.925069657s]
Apr 29 13:45:43.474: INFO: Created: latency-svc-6pjxb
Apr 29 13:45:43.504: INFO: Got endpoints: latency-svc-6pjxb [2.967736395s]
Apr 29 13:45:43.539: INFO: Created: latency-svc-nntng
Apr 29 13:45:43.558: INFO: Got endpoints: latency-svc-nntng [2.930951265s]
Apr 29 13:45:43.624: INFO: Created: latency-svc-h5qs9
Apr 29 13:45:43.627: INFO: Got endpoints: latency-svc-h5qs9 [2.939942022s]
Apr 29 13:45:43.703: INFO: Created: latency-svc-zq274
Apr 29 13:45:43.720: INFO: Got endpoints: latency-svc-zq274 [2.725003958s]
Apr 29 13:45:43.785: INFO: Created: latency-svc-zmbjz
Apr 29 13:45:43.812: INFO: Got endpoints: latency-svc-zmbjz [2.505203694s]
Apr 29 13:45:43.863: INFO: Created: latency-svc-mgbns
Apr 29 13:45:43.955: INFO: Got endpoints: latency-svc-mgbns [2.498585429s]
Apr 29 13:45:43.983: INFO: Created: latency-svc-89rqq
Apr 29 13:45:44.021: INFO: Got endpoints: latency-svc-89rqq [2.215515686s]
Apr 29 13:45:44.115: INFO: Created: latency-svc-2cb9m
Apr 29 13:45:44.160: INFO: Got endpoints: latency-svc-2cb9m [2.25555807s]
Apr 29 13:45:44.283: INFO: Created: latency-svc-r6g62
Apr 29 13:45:44.325: INFO: Got endpoints: latency-svc-r6g62 [2.276029868s]
Apr 29 13:45:44.452: INFO: Created: latency-svc-57tx2
Apr 29 13:45:44.684: INFO: Got endpoints: latency-svc-57tx2 [2.569036168s]
Apr 29 13:45:44.693: INFO: Created: latency-svc-4gh9m
Apr 29 13:45:44.713: INFO: Got endpoints: latency-svc-4gh9m [2.522394812s]
Apr 29 13:45:44.882: INFO: Created: latency-svc-xb9vt
Apr 29 13:45:44.919: INFO: Got endpoints: latency-svc-xb9vt [2.291792539s]
Apr 29 13:45:45.067: INFO: Created: latency-svc-jsn2n
Apr 29 13:45:45.083: INFO: Got endpoints: latency-svc-jsn2n [2.232068504s]
Apr 29 13:45:45.117: INFO: Created: latency-svc-qjlcn
Apr 29 13:45:45.162: INFO: Got endpoints: latency-svc-qjlcn [1.8487758s]
Apr 29 13:45:45.247: INFO: Created: latency-svc-nfc4g
Apr 29 13:45:45.292: INFO: Got endpoints: latency-svc-nfc4g [1.884557252s]
Apr 29 13:45:45.408: INFO: Created: latency-svc-g7lmh
Apr 29 13:45:45.444: INFO: Got endpoints: latency-svc-g7lmh [1.939445999s]
Apr 29 13:45:45.690: INFO: Created: latency-svc-fwk87
Apr 29 13:45:45.695: INFO: Got endpoints: latency-svc-fwk87 [2.136756134s]
Apr 29 13:45:45.915: INFO: Created: latency-svc-h7k4v
Apr 29 13:45:45.925: INFO: Got endpoints: latency-svc-h7k4v [2.29771521s]
Apr 29 13:45:45.981: INFO: Created: latency-svc-2p6gz
Apr 29 13:45:45.998: INFO: Got endpoints: latency-svc-2p6gz [2.27730055s]
Apr 29 13:45:46.037: INFO: Created: latency-svc-gdmzc
Apr 29 13:45:46.056: INFO: Got endpoints: latency-svc-gdmzc [2.244355753s]
Apr 29 13:45:46.109: INFO: Created: latency-svc-s8vs7
Apr 29 13:45:46.204: INFO: Got endpoints: latency-svc-s8vs7 [2.249749968s]
Apr 29 13:45:46.229: INFO: Created: latency-svc-bwcqt
Apr 29 13:45:46.250: INFO: Got endpoints: latency-svc-bwcqt [2.229022651s]
Apr 29 13:45:46.277: INFO: Created: latency-svc-t68r5
Apr 29 13:45:46.378: INFO: Got endpoints: latency-svc-t68r5 [2.218273943s]
Apr 29 13:45:46.381: INFO: Created: latency-svc-g6p7c
Apr 29 13:45:46.388: INFO: Got endpoints: latency-svc-g6p7c [2.062860751s]
Apr 29 13:45:46.415: INFO: Created: latency-svc-74vct
Apr 29 13:45:46.433: INFO: Got endpoints: latency-svc-74vct [1.747964993s]
Apr 29 13:45:46.455: INFO: Created: latency-svc-xl8vp
Apr 29 13:45:46.473: INFO: Got endpoints: latency-svc-xl8vp [1.760055548s]
Apr 29 13:45:46.528: INFO: Created: latency-svc-b7fgh
Apr 29 13:45:46.559: INFO: Got endpoints: latency-svc-b7fgh [1.640259775s]
Apr 29 13:45:46.559: INFO: Created: latency-svc-pk2jl
Apr 29 13:45:46.589: INFO: Got endpoints: latency-svc-pk2jl [1.505634953s]
Apr 29 13:45:46.619: INFO: Created: latency-svc-cdxp5
Apr 29 13:45:46.659: INFO: Got endpoints: latency-svc-cdxp5 [1.497296761s]
Apr 29 13:45:46.678: INFO: Created: latency-svc-95bvw
Apr 29 13:45:46.690: INFO: Got endpoints: latency-svc-95bvw [1.397522167s]
Apr 29 13:45:46.714: INFO: Created: latency-svc-vz9j9
Apr 29 13:45:46.726: INFO: Got endpoints: latency-svc-vz9j9 [1.282534198s]
Apr 29 13:45:46.755: INFO: Created: latency-svc-lvfvz
Apr 29 13:45:46.809: INFO: Got endpoints: latency-svc-lvfvz [1.113966861s]
Apr 29 13:45:46.847: INFO: Created: latency-svc-9pkkw
Apr 29 13:45:46.876: INFO: Got endpoints: latency-svc-9pkkw [951.312624ms]
Apr 29 13:45:46.959: INFO: Created: latency-svc-ftk8b
Apr 29 13:45:46.974: INFO: Got endpoints: latency-svc-ftk8b [976.320754ms]
Apr 29 13:45:47.003: INFO: Created: latency-svc-qgm4t
Apr 29 13:45:47.021: INFO: Got endpoints: latency-svc-qgm4t [964.690634ms]
Apr 29 13:45:47.044: INFO: Created: latency-svc-f6h5r
Apr 29 13:45:47.058: INFO: Got endpoints: latency-svc-f6h5r [853.258051ms]
Apr 29 13:45:47.110: INFO: Created: latency-svc-zcxtp
Apr 29 13:45:47.122: INFO: Got endpoints: latency-svc-zcxtp [871.975776ms]
Apr 29 13:45:47.164: INFO: Created: latency-svc-mnxsf
Apr 29 13:45:47.252: INFO: Got endpoints: latency-svc-mnxsf [874.15406ms]
Apr 29 13:45:47.255: INFO: Created: latency-svc-frbmc
Apr 29 13:45:47.308: INFO: Got endpoints: latency-svc-frbmc [920.273201ms]
Apr 29 13:45:47.671: INFO: Created: latency-svc-kfsf6
Apr 29 13:45:47.698: INFO: Got endpoints: latency-svc-kfsf6 [1.265802433s]
Apr 29 13:45:47.936: INFO: Created: latency-svc-22fcr
Apr 29 13:45:47.956: INFO: Got endpoints: latency-svc-22fcr [1.482569055s]
Apr 29 13:45:48.033: INFO: Created: latency-svc-j8q9f
Apr 29 13:45:48.109: INFO: Got endpoints: latency-svc-j8q9f [1.54939166s]
Apr 29 13:45:48.135: INFO: Created: latency-svc-2fcqq
Apr 29 13:45:48.177: INFO: Got endpoints: latency-svc-2fcqq [1.588149816s]
Apr 29 13:45:48.247: INFO: Created: latency-svc-t969t
Apr 29 13:45:48.249: INFO: Got endpoints: latency-svc-t969t [1.590086193s]
Apr 29 13:45:48.305: INFO: Created: latency-svc-2kxhm
Apr 29 13:45:48.323: INFO: Got endpoints: latency-svc-2kxhm [1.632727923s]
Apr 29 13:45:48.385: INFO: Created: latency-svc-nv72d
Apr 29 13:45:48.401: INFO: Got endpoints: latency-svc-nv72d [1.674338986s]
Apr 29 13:45:48.454: INFO: Created: latency-svc-2g98b
Apr 29 13:45:48.480: INFO: Got endpoints: latency-svc-2g98b [1.670329534s]
Apr 29 13:45:48.594: INFO: Created: latency-svc-w89ss
Apr 29 13:45:48.821: INFO: Got endpoints: latency-svc-w89ss [1.945463946s]
Apr 29 13:45:48.892: INFO: Created: latency-svc-9v4th
Apr 29 13:45:49.110: INFO: Got endpoints: latency-svc-9v4th [2.135397512s]
Apr 29 13:45:49.313: INFO: Created: latency-svc-bhq5c
Apr 29 13:45:49.331: INFO: Got endpoints: latency-svc-bhq5c [2.309663811s]
Apr 29 13:45:49.360: INFO: Created: latency-svc-snxd8
Apr 29 13:45:49.379: INFO: Got endpoints: latency-svc-snxd8 [2.321604031s]
Apr 29 13:45:49.702: INFO: Created: latency-svc-tjchv
Apr 29 13:45:49.763: INFO: Got endpoints: latency-svc-tjchv [2.640455324s]
Apr 29 13:45:49.764: INFO: Created: latency-svc-dcgk6
Apr 29 13:45:49.840: INFO: Got endpoints: latency-svc-dcgk6 [2.587268613s]
Apr 29 13:45:49.852: INFO: Created: latency-svc-g86h8
Apr 29 13:45:49.895: INFO: Got endpoints: latency-svc-g86h8 [2.586171737s]
Apr 29 13:45:49.983: INFO: Created: latency-svc-jnwww
Apr 29 13:45:49.992: INFO: Got endpoints: latency-svc-jnwww [2.293306339s]
Apr 29 13:45:50.022: INFO: Created: latency-svc-zlm7g
Apr 29 13:45:50.045: INFO: Got endpoints: latency-svc-zlm7g [2.089511425s]
Apr 29 13:45:50.312: INFO: Created: latency-svc-z79cc
Apr 29 13:45:50.329: INFO: Got endpoints: latency-svc-z79cc [2.220531364s]
Apr 29 13:45:50.399: INFO: Created: latency-svc-r8p57
Apr 29 13:45:50.453: INFO: Got endpoints: latency-svc-r8p57 [2.275883219s]
Apr 29 13:45:50.491: INFO: Created: latency-svc-8rl22
Apr 29 13:45:50.502: INFO: Got endpoints: latency-svc-8rl22 [2.252708515s]
Apr 29 13:45:50.528: INFO: Created: latency-svc-pkf76
Apr 29 13:45:50.588: INFO: Got endpoints: latency-svc-pkf76 [2.264824615s]
Apr 29 13:45:50.590: INFO: Created: latency-svc-r6lz8
Apr 29 13:45:50.600: INFO: Got endpoints: latency-svc-r6lz8 [2.198938912s]
Apr 29 13:45:50.627: INFO: Created: latency-svc-pg22h
Apr 29 13:45:50.651: INFO: Got endpoints: latency-svc-pg22h [2.171030616s]
Apr 29 13:45:50.681: INFO: Created: latency-svc-vllmq
Apr 29 13:45:50.756: INFO: Got endpoints: latency-svc-vllmq [1.934513016s]
Apr 29 13:45:50.758: INFO: Created: latency-svc-mb5nk
Apr 29 13:45:50.762: INFO: Got endpoints: latency-svc-mb5nk [1.652068489s]
Apr 29 13:45:50.791: INFO: Created: latency-svc-tvg5z
Apr 29 13:45:50.804: INFO: Got endpoints: latency-svc-tvg5z [1.473248751s]
Apr 29 13:45:50.831: INFO: Created: latency-svc-grbbm
Apr 29 13:45:50.954: INFO: Got endpoints: latency-svc-grbbm [1.574105493s]
Apr 29 13:45:50.956: INFO: Created: latency-svc-7td6f
Apr 29 13:45:50.960: INFO: Got endpoints: latency-svc-7td6f [1.196966342s]
Apr 29 13:45:51.036: INFO: Created: latency-svc-t2v6n
Apr 29 13:45:51.051: INFO: Got endpoints: latency-svc-t2v6n [1.210968903s]
Apr 29 13:45:51.103: INFO: Created: latency-svc-sw9br
Apr 29 13:45:51.107: INFO: Got endpoints: latency-svc-sw9br [1.212815936s]
Apr 29 13:45:51.137: INFO: Created: latency-svc-c7rxp
Apr 29 13:45:51.156: INFO: Got endpoints: latency-svc-c7rxp [1.16412583s]
Apr 29 13:45:51.187: INFO: Created: latency-svc-9zjvq
Apr 29 13:45:51.201: INFO: Got endpoints: latency-svc-9zjvq [1.156085078s]
Apr 29 13:45:51.253: INFO: Created: latency-svc-wvsmr
Apr 29 13:45:51.262: INFO: Got endpoints: latency-svc-wvsmr [932.542601ms]
Apr 29 13:45:51.287: INFO: Created: latency-svc-4ghzx
Apr 29 13:45:51.304: INFO: Got endpoints: latency-svc-4ghzx [850.953516ms]
Apr 29 13:45:51.330: INFO: Created: latency-svc-cwnrl
Apr 29 13:45:51.347: INFO: Got endpoints: latency-svc-cwnrl [844.289745ms]
Apr 29 13:45:51.396: INFO: Created: latency-svc-9tbkf
Apr 29 13:45:51.420: INFO: Got endpoints: latency-svc-9tbkf [832.531224ms]
Apr 29 13:45:51.458: INFO: Created: latency-svc-b7h6f
Apr 29 13:45:51.473: INFO: Got endpoints: latency-svc-b7h6f [873.332393ms]
Apr 29 13:45:51.491: INFO: Created: latency-svc-jqhh6
Apr 29 13:45:51.528: INFO: Got endpoints: latency-svc-jqhh6 [877.039605ms]
Apr 29 13:45:51.545: INFO: Created: latency-svc-rc62l
Apr 29 13:45:51.592: INFO: Got endpoints: latency-svc-rc62l [836.144994ms]
Apr 29 13:45:51.768: INFO: Created: latency-svc-7gwnp
Apr 29 13:45:51.784: INFO: Got endpoints: latency-svc-7gwnp [1.022131515s]
Apr 29 13:45:51.971: INFO: Created: latency-svc-b22pj
Apr 29 13:45:51.989: INFO: Got endpoints: latency-svc-b22pj [1.184893632s]
Apr 29 13:45:52.032: INFO: Created: latency-svc-sql6m
Apr 29 13:45:52.048: INFO: Got endpoints: latency-svc-sql6m [1.094504112s]
Apr 29 13:45:52.141: INFO: Created: latency-svc-crbs4
Apr 29 13:45:52.150: INFO: Got endpoints: latency-svc-crbs4 [1.190404085s]
Apr 29 13:45:52.188: INFO: Created: latency-svc-whr6w
Apr 29 13:45:52.217: INFO: Got endpoints: latency-svc-whr6w [1.166108745s]
Apr 29 13:45:52.282: INFO: Created: latency-svc-mldpr
Apr 29 13:45:52.296: INFO: Got endpoints: latency-svc-mldpr [1.188126199s]
Apr 29 13:45:52.319: INFO: Created: latency-svc-m72qc
Apr 29 13:45:52.362: INFO: Got endpoints: latency-svc-m72qc [1.205627457s]
Apr 29 13:45:52.420: INFO: Created: latency-svc-l6k4g
Apr 29 13:45:52.433: INFO: Got endpoints: latency-svc-l6k4g [1.232177925s]
Apr 29 13:45:52.452: INFO: Created: latency-svc-2hqsk
Apr 29 13:45:52.467: INFO: Got endpoints: latency-svc-2hqsk [1.205014084s]
Apr 29 13:45:52.487: INFO: Created: latency-svc-l8fh6
Apr 29 13:45:52.500: INFO: Got endpoints: latency-svc-l8fh6 [1.195405939s]
Apr 29 13:45:52.576: INFO: Created: latency-svc-bw288
Apr 29 13:45:52.590: INFO: Got endpoints: latency-svc-bw288 [1.243428523s]
Apr 29 13:45:52.609: INFO: Created: latency-svc-qccwd
Apr 29 13:45:52.626: INFO: Got endpoints: latency-svc-qccwd [1.205984185s]
Apr 29 13:45:52.651: INFO: Created: latency-svc-mbk75
Apr 29 13:45:52.662: INFO: Got endpoints: latency-svc-mbk75 [1.189321781s]
Apr 29 13:45:52.738: INFO: Created: latency-svc-bcj9k
Apr 29 13:45:52.765: INFO: Got endpoints: latency-svc-bcj9k [1.237422979s]
Apr 29 13:45:52.786: INFO: Created: latency-svc-gd8nr
Apr 29 13:45:52.899: INFO: Got endpoints: latency-svc-gd8nr [1.306667041s]
Apr 29 13:45:52.902: INFO: Created: latency-svc-t58dv
Apr 29 13:45:52.916: INFO: Got endpoints: latency-svc-t58dv [1.131709787s]
Apr 29 13:45:52.954: INFO: Created: latency-svc-b8pj9
Apr 29 13:45:52.982: INFO: Got endpoints: latency-svc-b8pj9 [993.344528ms]
Apr 29 13:45:53.037: INFO: Created: latency-svc-lzg8g
Apr 29 13:45:53.069: INFO: Got endpoints: latency-svc-lzg8g [1.020393532s]
Apr 29 13:45:53.071: INFO: Created: latency-svc-cdj5f
Apr 29 13:45:53.100: INFO: Got endpoints: latency-svc-cdj5f [950.091476ms]
Apr 29 13:45:53.136: INFO: Created: latency-svc-xctwq
Apr 29 13:45:53.175: INFO: Got endpoints: latency-svc-xctwq [957.713242ms]
Apr 29 13:45:53.195: INFO: Created: latency-svc-cctk9
Apr 29 13:45:53.231: INFO: Got endpoints: latency-svc-cctk9 [935.238422ms]
Apr 29 13:45:53.313: INFO: Created: latency-svc-78hx5
Apr 29 13:45:53.316: INFO: Got endpoints: latency-svc-78hx5 [954.223596ms]
Apr 29 13:45:53.388: INFO: Created: latency-svc-k89kv
Apr 29 13:45:53.404: INFO: Got endpoints: latency-svc-k89kv [970.161417ms]
Apr 29 13:45:53.463: INFO: Created: latency-svc-mcnch
Apr 29 13:45:53.478: INFO: Got endpoints: latency-svc-mcnch [1.01082122s]
Apr 29 13:45:53.506: INFO: Created: latency-svc-hwjkc
Apr 29 13:45:53.518: INFO: Got endpoints: latency-svc-hwjkc [1.018793812s]
Apr 29 13:45:53.543: INFO: Created: latency-svc-bpl8g
Apr 29 13:45:53.555: INFO: Got endpoints: latency-svc-bpl8g [964.472927ms]
Apr 29 13:45:53.594: INFO: Created: latency-svc-jwbmv
Apr 29 13:45:53.602: INFO: Got endpoints: latency-svc-jwbmv [976.123096ms]
Apr 29 13:45:53.628: INFO: Created: latency-svc-fbvb9
Apr 29 13:45:53.646: INFO: Got endpoints: latency-svc-fbvb9 [983.287051ms]
Apr 29 13:45:53.668: INFO: Created: latency-svc-nf467
Apr 29 13:45:53.693: INFO: Got endpoints: latency-svc-nf467 [928.109307ms]
Apr 29 13:45:53.750: INFO: Created: latency-svc-bwdwl
Apr 29 13:45:53.754: INFO: Got endpoints: latency-svc-bwdwl [854.951675ms]
Apr 29 13:45:53.808: INFO: Created: latency-svc-mnmhz
Apr 29 13:45:53.826: INFO: Got endpoints: latency-svc-mnmhz [910.07801ms]
Apr 29 13:45:53.844: INFO: Created: latency-svc-b88hq
Apr 29 13:45:53.905: INFO: Got endpoints: latency-svc-b88hq [922.589683ms]
Apr 29 13:45:53.927: INFO: Created: latency-svc-9rd4j
Apr 29 13:45:53.952: INFO: Got endpoints: latency-svc-9rd4j [883.091218ms]
Apr 29 13:45:53.994: INFO: Created: latency-svc-t4vjp
Apr 29 13:45:54.048: INFO: Got endpoints: latency-svc-t4vjp [947.56297ms]
Apr 29 13:45:54.100: INFO: Created: latency-svc-t4729
Apr 29 13:45:54.175: INFO: Got endpoints: latency-svc-t4729 [1.000256594s]
Apr 29 13:45:54.198: INFO: Created: latency-svc-9prpk
Apr 29 13:45:54.211: INFO: Got endpoints: latency-svc-9prpk [979.846266ms]
Apr 29 13:45:54.211: INFO: Latencies: [146.924822ms 226.550751ms 258.234524ms 297.681832ms 362.640178ms 406.887512ms 507.262335ms 557.555492ms 639.217956ms 664.947166ms 701.194456ms 738.133103ms 791.957596ms 832.531224ms 836.144994ms 844.289745ms 850.953516ms 853.258051ms 854.951675ms 858.487377ms 859.884861ms 868.074012ms 871.975776ms 873.014664ms 873.332393ms 874.15406ms 877.039605ms 878.580876ms 883.091218ms 886.334482ms 886.576998ms 900.852886ms 910.07801ms 913.302553ms 920.002407ms 920.273201ms 922.589683ms 928.109307ms 932.542601ms 934.003278ms 934.444321ms 935.238422ms 940.535243ms 944.263932ms 946.934132ms 947.56297ms 950.091476ms 951.312624ms 953.20044ms 954.223596ms 957.713242ms 961.280721ms 964.472927ms 964.690634ms 970.161417ms 976.123096ms 976.320754ms 979.846266ms 983.287051ms 987.266617ms 993.172862ms 993.344528ms 997.367676ms 1.000256594s 1.01082122s 1.015206152s 1.018793812s 1.020393532s 1.022131515s 1.031079907s 1.041844908s 1.044788277s 1.055849462s 1.090419692s 1.094504112s 1.113966861s 1.114321002s 1.118966698s 1.120575485s 1.131709787s 1.14341356s 1.156085078s 1.16412583s 1.166108745s 1.184893632s 1.188126199s 1.189321781s 1.190404085s 1.195405939s 1.196966342s 1.205014084s 1.205627457s 1.205984185s 1.210968903s 1.212815936s 1.224941976s 1.232177925s 1.237422979s 1.243428523s 1.265802433s 1.282534198s 1.306667041s 1.307526535s 1.318821177s 1.360484205s 1.372002923s 1.397522167s 1.43344738s 1.439750755s 1.451821218s 1.465277637s 1.473248751s 1.477277096s 1.482455407s 1.482569055s 1.489963776s 1.497296761s 1.498719636s 1.505634953s 1.515649021s 1.519202909s 1.521120751s 1.546381228s 1.54939166s 1.574105493s 1.588149816s 1.590086193s 1.632727923s 1.640259775s 1.652068489s 1.670329534s 1.674338986s 1.747964993s 1.760055548s 1.7829441s 1.801684891s 1.8487758s 1.884557252s 1.934513016s 1.939445999s 1.945463946s 1.984598347s 2.036814114s 2.051403938s 2.062860751s 2.06720716s 2.089511425s 2.129243862s 2.132995302s 2.135397512s 2.136756134s 2.151249117s 2.171030616s 2.198938912s 2.215515686s 2.218273943s 2.220531364s 2.223232529s 2.229022651s 2.232068504s 2.244355753s 2.249749968s 2.252708515s 2.25555807s 2.257519786s 2.264824615s 2.275883219s 2.276029868s 2.27730055s 2.287814652s 2.291792539s 2.293306339s 2.29771521s 2.298235737s 2.305183493s 2.308789189s 2.309663811s 2.321604031s 2.322119506s 2.32483479s 2.337204749s 2.380713503s 2.383217441s 2.42207583s 2.425866939s 2.465686984s 2.498585429s 2.505203694s 2.522394812s 2.528539444s 2.569036168s 2.586171737s 2.587268613s 2.640455324s 2.725003958s 2.903143835s 2.925069657s 2.930951265s 2.939942022s 2.967736395s]
Apr 29 13:45:54.211: INFO: 50 %ile: 1.282534198s
Apr 29 13:45:54.211: INFO: 90 %ile: 2.337204749s
Apr 29 13:45:54.211: INFO: 99 %ile: 2.939942022s
Apr 29 13:45:54.211: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:54.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8549" for this suite.

• [SLOW TEST:23.640 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":290,"completed":110,"skipped":1735,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:54.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:45:54.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-925" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":290,"completed":111,"skipped":1755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:45:54.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:45:54.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685" in namespace "projected-6059" to be "Succeeded or Failed"
Apr 29 13:45:54.461: INFO: Pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38489ms
Apr 29 13:45:56.465: INFO: Pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008706778s
Apr 29 13:45:58.469: INFO: Pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685": Phase="Running", Reason="", readiness=true. Elapsed: 4.012764701s
Apr 29 13:46:00.577: INFO: Pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120501202s
STEP: Saw pod success
Apr 29 13:46:00.577: INFO: Pod "downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685" satisfied condition "Succeeded or Failed"
Apr 29 13:46:00.581: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685 container client-container: 
STEP: delete the pod
Apr 29 13:46:00.820: INFO: Waiting for pod downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685 to disappear
Apr 29 13:46:00.997: INFO: Pod downwardapi-volume-e8d35a2d-48a0-40b4-a746-d168eac09685 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:46:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6059" for this suite.

• [SLOW TEST:6.741 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":112,"skipped":1785,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:46:01.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-cbc8968a-c3ac-44a2-97ac-9bd9385647c6 in namespace container-probe-4365
Apr 29 13:46:07.302: INFO: Started pod liveness-cbc8968a-c3ac-44a2-97ac-9bd9385647c6 in namespace container-probe-4365
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 13:46:07.312: INFO: Initial restart count of pod liveness-cbc8968a-c3ac-44a2-97ac-9bd9385647c6 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:50:07.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4365" for this suite.

• [SLOW TEST:246.798 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":290,"completed":113,"skipped":1801,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:50:07.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 13:50:14.364: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:50:14.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-175" for this suite.

• [SLOW TEST:6.790 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":290,"completed":114,"skipped":1823,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:50:14.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:50:15.507: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:50:17.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765015, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765015, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765015, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765015, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:50:20.768: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 29 13:50:20.791: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:50:20.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8917" for this suite.
STEP: Destroying namespace "webhook-8917-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.234 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":290,"completed":115,"skipped":1827,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:50:20.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:50:21.039: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c421d879-8de7-4c39-8f7e-5473282a222e", Controller:(*bool)(0xc002f5f7c6), BlockOwnerDeletion:(*bool)(0xc002f5f7c7)}}
Apr 29 13:50:21.052: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"786dac34-7f27-4583-a91f-bba96fa3dbb0", Controller:(*bool)(0xc002f955d6), BlockOwnerDeletion:(*bool)(0xc002f955d7)}}
Apr 29 13:50:21.141: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1589e57a-8656-45f6-9d64-8e7faa06b55f", Controller:(*bool)(0xc003211ea6), BlockOwnerDeletion:(*bool)(0xc003211ea7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:50:26.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-815" for this suite.

• [SLOW TEST:5.279 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":290,"completed":116,"skipped":1834,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:50:26.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2687.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2687.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2687.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2687.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.153_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2687.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2687.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2687.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2687.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2687.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2687.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 153.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.153_udp@PTR;check="$$(dig +tcp +noall +answer +search 153.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.153_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 13:50:36.523: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.530: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.533: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.554: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.559: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.562: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:36.579: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:50:41.618: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.625: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.629: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.668: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.670: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.673: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:41.692: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:50:46.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.831: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.853: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.858: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:46.898: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:50:51.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.609: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.612: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.632: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.640: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:51.656: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:50:56.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.735: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.740: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.817: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.821: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.824: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.828: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:50:56.920: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:51:01.584: INFO: Unable to read wheezy_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.587: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.593: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.611: INFO: Unable to read jessie_udp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.613: INFO: Unable to read jessie_tcp@dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.638: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.642: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local from pod dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5: the server could not find the requested resource (get pods dns-test-93efde31-7583-497a-8f90-04214cdc5ce5)
Apr 29 13:51:01.657: INFO: Lookups using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 failed for: [wheezy_udp@dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@dns-test-service.dns-2687.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_udp@dns-test-service.dns-2687.svc.cluster.local jessie_tcp@dns-test-service.dns-2687.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2687.svc.cluster.local]

Apr 29 13:51:06.651: INFO: DNS probes using dns-2687/dns-test-93efde31-7583-497a-8f90-04214cdc5ce5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:51:07.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2687" for this suite.

• [SLOW TEST:41.180 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":290,"completed":117,"skipped":1841,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:51:07.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:51:20.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7452" for this suite.

• [SLOW TEST:13.225 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":290,"completed":118,"skipped":1843,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:51:20.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-b7274c39-c04f-4945-b53a-6f69ed795afd
STEP: Creating a pod to test consume configMaps
Apr 29 13:51:20.684: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298" in namespace "projected-2123" to be "Succeeded or Failed"
Apr 29 13:51:20.758: INFO: Pod "pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298": Phase="Pending", Reason="", readiness=false. Elapsed: 74.035966ms
Apr 29 13:51:22.763: INFO: Pod "pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078647885s
Apr 29 13:51:24.768: INFO: Pod "pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083299519s
STEP: Saw pod success
Apr 29 13:51:24.768: INFO: Pod "pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298" satisfied condition "Succeeded or Failed"
Apr 29 13:51:24.771: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 13:51:24.823: INFO: Waiting for pod pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298 to disappear
Apr 29 13:51:24.838: INFO: Pod pod-projected-configmaps-2f28ed7d-4096-46f3-8724-57c3ebfc0298 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:51:24.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2123" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":119,"skipped":1862,"failed":0}
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:51:24.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-28724f1a-afc5-4cc9-8d33-00f74cf85968
STEP: Creating secret with name secret-projected-all-test-volume-b8915bab-7242-43eb-8e5d-3839d880f719
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 29 13:51:24.954: INFO: Waiting up to 5m0s for pod "projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6" in namespace "projected-503" to be "Succeeded or Failed"
Apr 29 13:51:24.963: INFO: Pod "projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807821ms
Apr 29 13:51:26.967: INFO: Pod "projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01295309s
Apr 29 13:51:28.972: INFO: Pod "projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017638738s
STEP: Saw pod success
Apr 29 13:51:28.972: INFO: Pod "projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6" satisfied condition "Succeeded or Failed"
Apr 29 13:51:28.974: INFO: Trying to get logs from node kali-worker2 pod projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6 container projected-all-volume-test: 
STEP: delete the pod
Apr 29 13:51:29.010: INFO: Waiting for pod projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6 to disappear
Apr 29 13:51:29.035: INFO: Pod projected-volume-fce2d950-8f22-40ce-a8c6-ab03d5cb02c6 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:51:29.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-503" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":290,"completed":120,"skipped":1865,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:51:29.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:51:29.138: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:51:31.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:51:33.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:35.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:37.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:39.143: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:41.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:43.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:45.141: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:47.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = false)
Apr 29 13:51:49.142: INFO: The status of Pod test-webserver-78a5fb87-b537-4f67-9c84-3b3bebd7a563 is Running (Ready = true)
Apr 29 13:51:49.146: INFO: Container started at 2020-04-29 13:51:31 +0000 UTC, pod became ready at 2020-04-29 13:51:48 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:51:49.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6578" for this suite.

• [SLOW TEST:20.089 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":290,"completed":121,"skipped":1901,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:51:49.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 29 13:51:49.241: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:52:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9637" for this suite.

• [SLOW TEST:14.624 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":290,"completed":122,"skipped":1924,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:52:03.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8123
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 13:52:03.860: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 13:52:03.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:52:06.005: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:52:07.979: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:09.977: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:11.977: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:13.979: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:15.978: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:17.979: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:52:19.979: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 13:52:19.984: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 13:52:24.049: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.93:8080/dial?request=hostname&protocol=udp&host=10.244.2.86&port=8081&tries=1'] Namespace:pod-network-test-8123 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:52:24.049: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:52:24.090002       7 log.go:172] (0xc002e164d0) (0xc000b7c960) Create stream
I0429 13:52:24.090036       7 log.go:172] (0xc002e164d0) (0xc000b7c960) Stream added, broadcasting: 1
I0429 13:52:24.095749       7 log.go:172] (0xc002e164d0) Reply frame received for 1
I0429 13:52:24.095792       7 log.go:172] (0xc002e164d0) (0xc000b7ca00) Create stream
I0429 13:52:24.095809       7 log.go:172] (0xc002e164d0) (0xc000b7ca00) Stream added, broadcasting: 3
I0429 13:52:24.096759       7 log.go:172] (0xc002e164d0) Reply frame received for 3
I0429 13:52:24.096800       7 log.go:172] (0xc002e164d0) (0xc0012d10e0) Create stream
I0429 13:52:24.096819       7 log.go:172] (0xc002e164d0) (0xc0012d10e0) Stream added, broadcasting: 5
I0429 13:52:24.098156       7 log.go:172] (0xc002e164d0) Reply frame received for 5
I0429 13:52:24.206943       7 log.go:172] (0xc002e164d0) Data frame received for 3
I0429 13:52:24.206976       7 log.go:172] (0xc000b7ca00) (3) Data frame handling
I0429 13:52:24.206993       7 log.go:172] (0xc000b7ca00) (3) Data frame sent
I0429 13:52:24.207388       7 log.go:172] (0xc002e164d0) Data frame received for 5
I0429 13:52:24.207433       7 log.go:172] (0xc0012d10e0) (5) Data frame handling
I0429 13:52:24.207468       7 log.go:172] (0xc002e164d0) Data frame received for 3
I0429 13:52:24.207488       7 log.go:172] (0xc000b7ca00) (3) Data frame handling
I0429 13:52:24.208920       7 log.go:172] (0xc002e164d0) Data frame received for 1
I0429 13:52:24.208935       7 log.go:172] (0xc000b7c960) (1) Data frame handling
I0429 13:52:24.208946       7 log.go:172] (0xc000b7c960) (1) Data frame sent
I0429 13:52:24.208956       7 log.go:172] (0xc002e164d0) (0xc000b7c960) Stream removed, broadcasting: 1
I0429 13:52:24.209053       7 log.go:172] (0xc002e164d0) (0xc000b7c960) Stream removed, broadcasting: 1
I0429 13:52:24.209066       7 log.go:172] (0xc002e164d0) (0xc000b7ca00) Stream removed, broadcasting: 3
I0429 13:52:24.209080       7 log.go:172] (0xc002e164d0) (0xc0012d10e0) Stream removed, broadcasting: 5
Apr 29 13:52:24.209: INFO: Waiting for responses: map[]
I0429 13:52:24.209573       7 log.go:172] (0xc002e164d0) Go away received
Apr 29 13:52:24.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.93:8080/dial?request=hostname&protocol=udp&host=10.244.1.92&port=8081&tries=1'] Namespace:pod-network-test-8123 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:52:24.212: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:52:24.273755       7 log.go:172] (0xc0019b8370) (0xc0012d1680) Create stream
I0429 13:52:24.273785       7 log.go:172] (0xc0019b8370) (0xc0012d1680) Stream added, broadcasting: 1
I0429 13:52:24.275722       7 log.go:172] (0xc0019b8370) Reply frame received for 1
I0429 13:52:24.275754       7 log.go:172] (0xc0019b8370) (0xc000770280) Create stream
I0429 13:52:24.275766       7 log.go:172] (0xc0019b8370) (0xc000770280) Stream added, broadcasting: 3
I0429 13:52:24.276533       7 log.go:172] (0xc0019b8370) Reply frame received for 3
I0429 13:52:24.276581       7 log.go:172] (0xc0019b8370) (0xc000b7d680) Create stream
I0429 13:52:24.276599       7 log.go:172] (0xc0019b8370) (0xc000b7d680) Stream added, broadcasting: 5
I0429 13:52:24.277558       7 log.go:172] (0xc0019b8370) Reply frame received for 5
I0429 13:52:24.349694       7 log.go:172] (0xc0019b8370) Data frame received for 3
I0429 13:52:24.349721       7 log.go:172] (0xc000770280) (3) Data frame handling
I0429 13:52:24.349743       7 log.go:172] (0xc000770280) (3) Data frame sent
I0429 13:52:24.350151       7 log.go:172] (0xc0019b8370) Data frame received for 3
I0429 13:52:24.350166       7 log.go:172] (0xc000770280) (3) Data frame handling
I0429 13:52:24.350192       7 log.go:172] (0xc0019b8370) Data frame received for 5
I0429 13:52:24.350230       7 log.go:172] (0xc000b7d680) (5) Data frame handling
I0429 13:52:24.352404       7 log.go:172] (0xc0019b8370) Data frame received for 1
I0429 13:52:24.352431       7 log.go:172] (0xc0012d1680) (1) Data frame handling
I0429 13:52:24.352449       7 log.go:172] (0xc0012d1680) (1) Data frame sent
I0429 13:52:24.352482       7 log.go:172] (0xc0019b8370) (0xc0012d1680) Stream removed, broadcasting: 1
I0429 13:52:24.352526       7 log.go:172] (0xc0019b8370) Go away received
I0429 13:52:24.352551       7 log.go:172] (0xc0019b8370) (0xc0012d1680) Stream removed, broadcasting: 1
I0429 13:52:24.352569       7 log.go:172] (0xc0019b8370) (0xc000770280) Stream removed, broadcasting: 3
I0429 13:52:24.352580       7 log.go:172] (0xc0019b8370) (0xc000b7d680) Stream removed, broadcasting: 5
Apr 29 13:52:24.352: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:52:24.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8123" for this suite.

• [SLOW TEST:20.580 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":290,"completed":123,"skipped":1925,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:52:24.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:52:40.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1662" for this suite.

• [SLOW TEST:16.146 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":290,"completed":124,"skipped":1930,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:52:40.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Apr 29 13:52:40.553: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:52:47.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9321" for this suite.

• [SLOW TEST:7.355 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":290,"completed":125,"skipped":1972,"failed":0}
SSSSSSSS
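A pod of the shape the init-container test creates can be sketched as below. This is an assumed minimal example (names, images, and commands are illustrative); the key property is that with `restartPolicy: Always`, every init container must run to completion, in order, before the main container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name, not from the log
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.28        # illustrative image
    command: ["sh", "-c", "true"]   # exits 0 so the pod can proceed
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
```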
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:52:47.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-3324
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3324 to expose endpoints map[]
Apr 29 13:52:48.003: INFO: Get endpoints failed (10.115376ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 29 13:52:49.512: INFO: successfully validated that service multi-endpoint-test in namespace services-3324 exposes endpoints map[] (1.519717557s elapsed)
STEP: Creating pod pod1 in namespace services-3324
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3324 to expose endpoints map[pod1:[100]]
Apr 29 13:52:54.355: INFO: successfully validated that service multi-endpoint-test in namespace services-3324 exposes endpoints map[pod1:[100]] (4.73199059s elapsed)
STEP: Creating pod pod2 in namespace services-3324
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3324 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 29 13:52:58.538: INFO: successfully validated that service multi-endpoint-test in namespace services-3324 exposes endpoints map[pod1:[100] pod2:[101]] (4.17874305s elapsed)
STEP: Deleting pod pod1 in namespace services-3324
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3324 to expose endpoints map[pod2:[101]]
Apr 29 13:52:59.675: INFO: successfully validated that service multi-endpoint-test in namespace services-3324 exposes endpoints map[pod2:[101]] (1.13257761s elapsed)
STEP: Deleting pod pod2 in namespace services-3324
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3324 to expose endpoints map[]
Apr 29 13:53:00.808: INFO: successfully validated that service multi-endpoint-test in namespace services-3324 exposes endpoints map[] (1.128982147s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:53:00.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3324" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:13.058 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":290,"completed":126,"skipped":1980,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
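A multiport Service consistent with the endpoint maps seen above (`map[pod1:[100] pod2:[101]]`) can be sketched as follows. The selector and service ports are assumptions for illustration; only the target ports 100 and 101 are taken from the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: demo                  # illustrative selector
  ports:
  - name: portname1            # each port must be named when a Service has several
    port: 80
    targetPort: 100            # matches pod1's endpoint port in the log
  - name: portname2
    port: 81
    targetPort: 101            # matches pod2's endpoint port in the log
```

As pods matching the selector come and go, the endpoints controller updates the Service's Endpoints object accordingly, which is what each "waiting ... to expose endpoints map[...]" step polls for.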
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:53:00.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-8071
STEP: creating service affinity-nodeport in namespace services-8071
STEP: creating replication controller affinity-nodeport in namespace services-8071
I0429 13:53:01.106586       7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8071, replica count: 3
I0429 13:53:04.156981       7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:53:07.157343       7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:53:07.168: INFO: Creating new exec pod
Apr 29 13:53:14.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8071 execpod-affinity8vxdm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80'
Apr 29 13:53:14.433: INFO: stderr: "I0429 13:53:14.333585    2071 log.go:172] (0xc00041bc30) (0xc00069cd20) Create stream\nI0429 13:53:14.333635    2071 log.go:172] (0xc00041bc30) (0xc00069cd20) Stream added, broadcasting: 1\nI0429 13:53:14.335832    2071 log.go:172] (0xc00041bc30) Reply frame received for 1\nI0429 13:53:14.335860    2071 log.go:172] (0xc00041bc30) (0xc0006d2640) Create stream\nI0429 13:53:14.335867    2071 log.go:172] (0xc00041bc30) (0xc0006d2640) Stream added, broadcasting: 3\nI0429 13:53:14.336864    2071 log.go:172] (0xc00041bc30) Reply frame received for 3\nI0429 13:53:14.336904    2071 log.go:172] (0xc00041bc30) (0xc000236320) Create stream\nI0429 13:53:14.336917    2071 log.go:172] (0xc00041bc30) (0xc000236320) Stream added, broadcasting: 5\nI0429 13:53:14.338297    2071 log.go:172] (0xc00041bc30) Reply frame received for 5\nI0429 13:53:14.426472    2071 log.go:172] (0xc00041bc30) Data frame received for 5\nI0429 13:53:14.426507    2071 log.go:172] (0xc000236320) (5) Data frame handling\nI0429 13:53:14.426522    2071 log.go:172] (0xc000236320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0429 13:53:14.427079    2071 log.go:172] (0xc00041bc30) Data frame received for 5\nI0429 13:53:14.427099    2071 log.go:172] (0xc000236320) (5) Data frame handling\nI0429 13:53:14.427112    2071 log.go:172] (0xc000236320) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0429 13:53:14.427932    2071 log.go:172] (0xc00041bc30) Data frame received for 5\nI0429 13:53:14.427950    2071 log.go:172] (0xc000236320) (5) Data frame handling\nI0429 13:53:14.427995    2071 log.go:172] (0xc00041bc30) Data frame received for 3\nI0429 13:53:14.428032    2071 log.go:172] (0xc0006d2640) (3) Data frame handling\nI0429 13:53:14.429581    2071 log.go:172] (0xc00041bc30) Data frame received for 1\nI0429 13:53:14.429596    2071 log.go:172] (0xc00069cd20) (1) Data frame handling\nI0429 13:53:14.429616    2071 log.go:172] 
(0xc00069cd20) (1) Data frame sent\nI0429 13:53:14.429753    2071 log.go:172] (0xc00041bc30) (0xc00069cd20) Stream removed, broadcasting: 1\nI0429 13:53:14.429784    2071 log.go:172] (0xc00041bc30) Go away received\nI0429 13:53:14.430056    2071 log.go:172] (0xc00041bc30) (0xc00069cd20) Stream removed, broadcasting: 1\nI0429 13:53:14.430069    2071 log.go:172] (0xc00041bc30) (0xc0006d2640) Stream removed, broadcasting: 3\nI0429 13:53:14.430076    2071 log.go:172] (0xc00041bc30) (0xc000236320) Stream removed, broadcasting: 5\n"
Apr 29 13:53:14.434: INFO: stdout: ""
Apr 29 13:53:14.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8071 execpod-affinity8vxdm -- /bin/sh -x -c nc -zv -t -w 2 10.98.146.3 80'
Apr 29 13:53:14.648: INFO: stderr: "I0429 13:53:14.568137    2093 log.go:172] (0xc0006b4160) (0xc0006c80a0) Create stream\nI0429 13:53:14.568195    2093 log.go:172] (0xc0006b4160) (0xc0006c80a0) Stream added, broadcasting: 1\nI0429 13:53:14.573327    2093 log.go:172] (0xc0006b4160) Reply frame received for 1\nI0429 13:53:14.573377    2093 log.go:172] (0xc0006b4160) (0xc00061ed20) Create stream\nI0429 13:53:14.573392    2093 log.go:172] (0xc0006b4160) (0xc00061ed20) Stream added, broadcasting: 3\nI0429 13:53:14.574439    2093 log.go:172] (0xc0006b4160) Reply frame received for 3\nI0429 13:53:14.574473    2093 log.go:172] (0xc0006b4160) (0xc000556460) Create stream\nI0429 13:53:14.574485    2093 log.go:172] (0xc0006b4160) (0xc000556460) Stream added, broadcasting: 5\nI0429 13:53:14.575291    2093 log.go:172] (0xc0006b4160) Reply frame received for 5\nI0429 13:53:14.642722    2093 log.go:172] (0xc0006b4160) Data frame received for 3\nI0429 13:53:14.642748    2093 log.go:172] (0xc00061ed20) (3) Data frame handling\nI0429 13:53:14.642766    2093 log.go:172] (0xc0006b4160) Data frame received for 5\nI0429 13:53:14.642772    2093 log.go:172] (0xc000556460) (5) Data frame handling\nI0429 13:53:14.642779    2093 log.go:172] (0xc000556460) (5) Data frame sent\nI0429 13:53:14.642785    2093 log.go:172] (0xc0006b4160) Data frame received for 5\nI0429 13:53:14.642793    2093 log.go:172] (0xc000556460) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.146.3 80\nConnection to 10.98.146.3 80 port [tcp/http] succeeded!\nI0429 13:53:14.644452    2093 log.go:172] (0xc0006b4160) Data frame received for 1\nI0429 13:53:14.644470    2093 log.go:172] (0xc0006c80a0) (1) Data frame handling\nI0429 13:53:14.644481    2093 log.go:172] (0xc0006c80a0) (1) Data frame sent\nI0429 13:53:14.644494    2093 log.go:172] (0xc0006b4160) (0xc0006c80a0) Stream removed, broadcasting: 1\nI0429 13:53:14.644615    2093 log.go:172] (0xc0006b4160) Go away received\nI0429 13:53:14.644817    2093 log.go:172] 
(0xc0006b4160) (0xc0006c80a0) Stream removed, broadcasting: 1\nI0429 13:53:14.644835    2093 log.go:172] (0xc0006b4160) (0xc00061ed20) Stream removed, broadcasting: 3\nI0429 13:53:14.644841    2093 log.go:172] (0xc0006b4160) (0xc000556460) Stream removed, broadcasting: 5\n"
Apr 29 13:53:14.648: INFO: stdout: ""
Apr 29 13:53:14.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8071 execpod-affinity8vxdm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31884'
Apr 29 13:53:14.936: INFO: stderr: "I0429 13:53:14.813461    2114 log.go:172] (0xc000a43290) (0xc000b660a0) Create stream\nI0429 13:53:14.813553    2114 log.go:172] (0xc000a43290) (0xc000b660a0) Stream added, broadcasting: 1\nI0429 13:53:14.817829    2114 log.go:172] (0xc000a43290) Reply frame received for 1\nI0429 13:53:14.817870    2114 log.go:172] (0xc000a43290) (0xc000705ea0) Create stream\nI0429 13:53:14.817885    2114 log.go:172] (0xc000a43290) (0xc000705ea0) Stream added, broadcasting: 3\nI0429 13:53:14.818666    2114 log.go:172] (0xc000a43290) Reply frame received for 3\nI0429 13:53:14.818688    2114 log.go:172] (0xc000a43290) (0xc00058c500) Create stream\nI0429 13:53:14.818696    2114 log.go:172] (0xc000a43290) (0xc00058c500) Stream added, broadcasting: 5\nI0429 13:53:14.819576    2114 log.go:172] (0xc000a43290) Reply frame received for 5\nI0429 13:53:14.930952    2114 log.go:172] (0xc000a43290) Data frame received for 3\nI0429 13:53:14.930974    2114 log.go:172] (0xc000705ea0) (3) Data frame handling\nI0429 13:53:14.930989    2114 log.go:172] (0xc000a43290) Data frame received for 5\nI0429 13:53:14.931004    2114 log.go:172] (0xc00058c500) (5) Data frame handling\nI0429 13:53:14.931012    2114 log.go:172] (0xc00058c500) (5) Data frame sent\nI0429 13:53:14.931019    2114 log.go:172] (0xc000a43290) Data frame received for 5\nI0429 13:53:14.931025    2114 log.go:172] (0xc00058c500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31884\nConnection to 172.17.0.15 31884 port [tcp/31884] succeeded!\nI0429 13:53:14.932467    2114 log.go:172] (0xc000a43290) Data frame received for 1\nI0429 13:53:14.932487    2114 log.go:172] (0xc000b660a0) (1) Data frame handling\nI0429 13:53:14.932508    2114 log.go:172] (0xc000b660a0) (1) Data frame sent\nI0429 13:53:14.932525    2114 log.go:172] (0xc000a43290) (0xc000b660a0) Stream removed, broadcasting: 1\nI0429 13:53:14.932680    2114 log.go:172] (0xc000a43290) Go away received\nI0429 13:53:14.932875    2114 log.go:172] 
(0xc000a43290) (0xc000b660a0) Stream removed, broadcasting: 1\nI0429 13:53:14.932891    2114 log.go:172] (0xc000a43290) (0xc000705ea0) Stream removed, broadcasting: 3\nI0429 13:53:14.932899    2114 log.go:172] (0xc000a43290) (0xc00058c500) Stream removed, broadcasting: 5\n"
Apr 29 13:53:14.936: INFO: stdout: ""
Apr 29 13:53:14.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8071 execpod-affinity8vxdm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31884'
Apr 29 13:53:15.131: INFO: stderr: "I0429 13:53:15.064466    2134 log.go:172] (0xc00003a6e0) (0xc0005defa0) Create stream\nI0429 13:53:15.064541    2134 log.go:172] (0xc00003a6e0) (0xc0005defa0) Stream added, broadcasting: 1\nI0429 13:53:15.067309    2134 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0429 13:53:15.067347    2134 log.go:172] (0xc00003a6e0) (0xc000446d20) Create stream\nI0429 13:53:15.067357    2134 log.go:172] (0xc00003a6e0) (0xc000446d20) Stream added, broadcasting: 3\nI0429 13:53:15.068095    2134 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0429 13:53:15.068129    2134 log.go:172] (0xc00003a6e0) (0xc0005dfc20) Create stream\nI0429 13:53:15.068139    2134 log.go:172] (0xc00003a6e0) (0xc0005dfc20) Stream added, broadcasting: 5\nI0429 13:53:15.068795    2134 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0429 13:53:15.125268    2134 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0429 13:53:15.125309    2134 log.go:172] (0xc0005dfc20) (5) Data frame handling\nI0429 13:53:15.125341    2134 log.go:172] (0xc0005dfc20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 31884\nI0429 13:53:15.125371    2134 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0429 13:53:15.125394    2134 log.go:172] (0xc0005dfc20) (5) Data frame handling\nI0429 13:53:15.125410    2134 log.go:172] (0xc0005dfc20) (5) Data frame sent\nConnection to 172.17.0.18 31884 port [tcp/31884] succeeded!\nI0429 13:53:15.125695    2134 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0429 13:53:15.125719    2134 log.go:172] (0xc0005dfc20) (5) Data frame handling\nI0429 13:53:15.125753    2134 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0429 13:53:15.125786    2134 log.go:172] (0xc000446d20) (3) Data frame handling\nI0429 13:53:15.127443    2134 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0429 13:53:15.127483    2134 log.go:172] (0xc0005defa0) (1) Data frame handling\nI0429 13:53:15.127524    2134 log.go:172] 
(0xc0005defa0) (1) Data frame sent\nI0429 13:53:15.127633    2134 log.go:172] (0xc00003a6e0) (0xc0005defa0) Stream removed, broadcasting: 1\nI0429 13:53:15.127693    2134 log.go:172] (0xc00003a6e0) Go away received\nI0429 13:53:15.127904    2134 log.go:172] (0xc00003a6e0) (0xc0005defa0) Stream removed, broadcasting: 1\nI0429 13:53:15.127921    2134 log.go:172] (0xc00003a6e0) (0xc000446d20) Stream removed, broadcasting: 3\nI0429 13:53:15.127935    2134 log.go:172] (0xc00003a6e0) (0xc0005dfc20) Stream removed, broadcasting: 5\n"
Apr 29 13:53:15.131: INFO: stdout: ""
Apr 29 13:53:15.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8071 execpod-affinity8vxdm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.15:31884/ ; done'
Apr 29 13:53:15.415: INFO: stderr: "I0429 13:53:15.260843    2155 log.go:172] (0xc00057e4d0) (0xc000348140) Create stream\nI0429 13:53:15.260914    2155 log.go:172] (0xc00057e4d0) (0xc000348140) Stream added, broadcasting: 1\nI0429 13:53:15.262878    2155 log.go:172] (0xc00057e4d0) Reply frame received for 1\nI0429 13:53:15.262917    2155 log.go:172] (0xc00057e4d0) (0xc0002ff540) Create stream\nI0429 13:53:15.262928    2155 log.go:172] (0xc00057e4d0) (0xc0002ff540) Stream added, broadcasting: 3\nI0429 13:53:15.263701    2155 log.go:172] (0xc00057e4d0) Reply frame received for 3\nI0429 13:53:15.263745    2155 log.go:172] (0xc00057e4d0) (0xc0001397c0) Create stream\nI0429 13:53:15.263766    2155 log.go:172] (0xc00057e4d0) (0xc0001397c0) Stream added, broadcasting: 5\nI0429 13:53:15.264699    2155 log.go:172] (0xc00057e4d0) Reply frame received for 5\nI0429 13:53:15.332069    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.332099    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.332107    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.332117    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.332122    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.332128    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.332921    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.332938    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.332948    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.333678    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.333698    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.333707    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.333713    2155 log.go:172] (0xc00057e4d0) Data frame received for 
5\nI0429 13:53:15.333720    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.333758    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.333784    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.333796    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.333807    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.338090    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.338109    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.338129    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.338422    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.338433    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.338443    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.338470    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.338492    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.338508    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.341559    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.341571    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.341589    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.341820    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.341837    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.341843    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.341851    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.341855    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.341860    2155 log.go:172] (0xc0001397c0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.346925    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.346937    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.346946    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.347420    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.347436    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.347443    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.347523    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.347534    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.347544    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.352157    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.352186    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.352210    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.352723    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.352743    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.352750    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.352756    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\n+ echo\n+ curl -q -sI0429 13:53:15.352772    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.352818    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.352840    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.352859    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.352867    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.358109    2155 log.go:172] (0xc00057e4d0) Data frame 
received for 3\nI0429 13:53:15.358128    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.358145    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.358586    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.358602    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.358611    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.358765    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.358785    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.358801    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.362246    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.362264    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.362290    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.362751    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.362778    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.362789    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.362802    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.362810    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.362827    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.367385    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.367412    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.367437    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.367927    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.367955    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.367995    2155 log.go:172] (0xc0002ff540) 
(3) Data frame sent\nI0429 13:53:15.368014    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.368026    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.368043    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.371960    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.371977    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.371987    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.372455    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.372501    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.372520    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.372535    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.372549    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.372570    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.378091    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.378107    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.378118    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.378459    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.378480    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.378498    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0429 13:53:15.378510    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.378517    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.378527    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.378552    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\n 
http://172.17.0.15:31884/\nI0429 13:53:15.378560    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.378583    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.382257    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.382275    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.382296    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.382843    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.382875    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.382889    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.382916    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.382929    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.382939    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.386580    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.386599    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.386612    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.386939    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.386957    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.386978    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0429 13:53:15.386992    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.387030    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.387048    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n 2 http://172.17.0.15:31884/\nI0429 13:53:15.387071    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.387090    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.387101    2155 log.go:172] 
(0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.391164    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.391183    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.391195    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.391519    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.391546    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.391563    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.391581    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.391595    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.391625    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\nI0429 13:53:15.391661    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.391675    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.391697    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.395604    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.395618    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.395627    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.396041    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.396066    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.396092    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.396219    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.396262    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.396301    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.400243    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.400259    2155 
log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.400267    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.400673    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.400684    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.400692    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.400708    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.400725    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.400742    2155 log.go:172] (0xc0001397c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:31884/\nI0429 13:53:15.406354    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.406372    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.406391    2155 log.go:172] (0xc0002ff540) (3) Data frame sent\nI0429 13:53:15.407344    2155 log.go:172] (0xc00057e4d0) Data frame received for 3\nI0429 13:53:15.407391    2155 log.go:172] (0xc0002ff540) (3) Data frame handling\nI0429 13:53:15.407490    2155 log.go:172] (0xc00057e4d0) Data frame received for 5\nI0429 13:53:15.407503    2155 log.go:172] (0xc0001397c0) (5) Data frame handling\nI0429 13:53:15.410131    2155 log.go:172] (0xc00057e4d0) Data frame received for 1\nI0429 13:53:15.410165    2155 log.go:172] (0xc000348140) (1) Data frame handling\nI0429 13:53:15.410208    2155 log.go:172] (0xc000348140) (1) Data frame sent\nI0429 13:53:15.410233    2155 log.go:172] (0xc00057e4d0) (0xc000348140) Stream removed, broadcasting: 1\nI0429 13:53:15.410250    2155 log.go:172] (0xc00057e4d0) Go away received\nI0429 13:53:15.410761    2155 log.go:172] (0xc00057e4d0) (0xc000348140) Stream removed, broadcasting: 1\nI0429 13:53:15.410781    2155 log.go:172] (0xc00057e4d0) (0xc0002ff540) Stream removed, broadcasting: 3\nI0429 13:53:15.410792    2155 log.go:172] (0xc00057e4d0) (0xc0001397c0) Stream removed, broadcasting: 
5\n"
Apr 29 13:53:15.416: INFO: stdout: "\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd\naffinity-nodeport-95hsd"
Apr 29 13:53:15.416: INFO: Received response from host: 
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Received response from host: affinity-nodeport-95hsd
Apr 29 13:53:15.416: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-8071, will wait for the garbage collector to delete the pods
Apr 29 13:53:15.545: INFO: Deleting ReplicationController affinity-nodeport took: 7.416966ms
Apr 29 13:53:15.945: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.249651ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:53:23.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8071" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:23.020 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":127,"skipped":2001,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:53:23.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-bced82d5-0563-4565-ba37-942216c207a5
STEP: Creating configMap with name cm-test-opt-upd-f8d32aa7-70f1-4d39-9c1b-a7e0424b14dd
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bced82d5-0563-4565-ba37-942216c207a5
STEP: Updating configmap cm-test-opt-upd-f8d32aa7-70f1-4d39-9c1b-a7e0424b14dd
STEP: Creating configMap with name cm-test-opt-create-98dfeb1a-c224-40db-947f-65a7f1e2e2b4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:53:34.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4468" for this suite.

• [SLOW TEST:10.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":128,"skipped":2013,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:53:34.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 29 13:53:34.356: INFO: Waiting up to 5m0s for pod "downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc" in namespace "downward-api-640" to be "Succeeded or Failed"
Apr 29 13:53:34.364: INFO: Pod "downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011466ms
Apr 29 13:53:36.382: INFO: Pod "downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026840137s
Apr 29 13:53:38.387: INFO: Pod "downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031396814s
STEP: Saw pod success
Apr 29 13:53:38.387: INFO: Pod "downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc" satisfied condition "Succeeded or Failed"
Apr 29 13:53:38.391: INFO: Trying to get logs from node kali-worker pod downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc container dapi-container: 
STEP: delete the pod
Apr 29 13:53:38.443: INFO: Waiting for pod downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc to disappear
Apr 29 13:53:38.449: INFO: Pod downward-api-8cb17781-1fa8-460f-aade-f6a6694243bc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:53:38.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-640" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":290,"completed":129,"skipped":2017,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:53:38.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-959
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-959
STEP: Deleting pre-stop pod
Apr 29 13:53:55.614: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:53:55.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-959" for this suite.

• [SLOW TEST:17.190 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":290,"completed":130,"skipped":2020,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:53:55.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Apr 29 13:53:55.726: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:04.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2" for this suite.

• [SLOW TEST:8.712 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":290,"completed":131,"skipped":2028,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:04.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:54:04.414: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:05.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8503" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":290,"completed":132,"skipped":2040,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:05.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:54:06.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1" in namespace "projected-1158" to be "Succeeded or Failed"
Apr 29 13:54:06.317: INFO: Pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1": Phase="Pending", Reason="", readiness=false. Elapsed: 195.30318ms
Apr 29 13:54:08.349: INFO: Pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227633365s
Apr 29 13:54:10.354: INFO: Pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232350605s
Apr 29 13:54:12.359: INFO: Pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236924205s
STEP: Saw pod success
Apr 29 13:54:12.359: INFO: Pod "downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1" satisfied condition "Succeeded or Failed"
Apr 29 13:54:12.362: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1 container client-container: 
STEP: delete the pod
Apr 29 13:54:12.403: INFO: Waiting for pod downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1 to disappear
Apr 29 13:54:12.413: INFO: Pod downwardapi-volume-d1e6a383-1085-4234-920e-5dd0aa1b48d1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:12.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1158" for this suite.

• [SLOW TEST:6.803 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":290,"completed":133,"skipped":2065,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:12.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-39c19bf3-51ba-4bfc-b1c8-c2bf0ca74fda
STEP: Creating a pod to test consume configMaps
Apr 29 13:54:12.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469" in namespace "configmap-2629" to be "Succeeded or Failed"
Apr 29 13:54:12.527: INFO: Pod "pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469": Phase="Pending", Reason="", readiness=false. Elapsed: 16.54782ms
Apr 29 13:54:14.556: INFO: Pod "pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044898239s
Apr 29 13:54:16.646: INFO: Pod "pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13525283s
STEP: Saw pod success
Apr 29 13:54:16.646: INFO: Pod "pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469" satisfied condition "Succeeded or Failed"
Apr 29 13:54:16.649: INFO: Trying to get logs from node kali-worker pod pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469 container configmap-volume-test: 
STEP: delete the pod
Apr 29 13:54:16.734: INFO: Waiting for pod pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469 to disappear
Apr 29 13:54:16.736: INFO: Pod pod-configmaps-d7e1146d-edf5-48c7-bba2-f764be716469 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:16.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2629" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":290,"completed":134,"skipped":2084,"failed":0}

------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:16.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:23.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4644" for this suite.
STEP: Destroying namespace "nsdeletetest-3891" for this suite.
Apr 29 13:54:23.066: INFO: Namespace nsdeletetest-3891 was already deleted
STEP: Destroying namespace "nsdeletetest-2260" for this suite.

• [SLOW TEST:6.325 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":290,"completed":135,"skipped":2084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:23.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:54:24.090: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:54:26.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765264, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765264, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765264, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765263, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:54:29.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:29.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6242" for this suite.
STEP: Destroying namespace "webhook-6242-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.843 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":290,"completed":136,"skipped":2112,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:29.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:54:30.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1785" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":290,"completed":137,"skipped":2113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:54:30.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4658 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4658;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4658 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4658;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4658.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4658.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4658.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4658.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4658.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4658.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4658.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 219.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.219_udp@PTR;check="$$(dig +tcp +noall +answer +search 219.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.219_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4658 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4658;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4658 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4658;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4658.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4658.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4658.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4658.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4658.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4658.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4658.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4658.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4658.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 219.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.219_udp@PTR;check="$$(dig +tcp +noall +answer +search 219.9.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.9.219_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 13:54:39.356: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.358: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.363: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.367: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.386: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.389: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.392: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.394: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.397: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.400: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.403: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.406: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:39.422: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:54:44.427: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.431: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.443: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.445: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.448: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.599: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.602: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.605: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.609: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.615: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.618: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:44.634: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:54:49.427: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.430: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.432: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.444: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.447: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.465: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.468: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.472: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.475: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.478: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.490: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:49.576: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:54:54.427: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.430: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.443: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.446: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.494: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.497: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.500: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.504: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.507: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:54.535: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:54:59.428: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.431: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.434: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.436: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.440: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.445: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.465: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.468: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.470: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.475: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.478: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.483: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:54:59.502: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:55:04.427: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.431: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.433: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.437: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.440: INFO: Unable to read wheezy_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.445: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.448: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.470: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.473: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.476: INFO: Unable to read jessie_udp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.478: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658 from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.481: INFO: Unable to read jessie_udp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.488: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.491: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc from pod dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3: the server could not find the requested resource (get pods dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3)
Apr 29 13:55:04.511: INFO: Lookups using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4658 wheezy_tcp@dns-test-service.dns-4658 wheezy_udp@dns-test-service.dns-4658.svc wheezy_tcp@dns-test-service.dns-4658.svc wheezy_udp@_http._tcp.dns-test-service.dns-4658.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4658.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4658 jessie_tcp@dns-test-service.dns-4658 jessie_udp@dns-test-service.dns-4658.svc jessie_tcp@dns-test-service.dns-4658.svc jessie_udp@_http._tcp.dns-test-service.dns-4658.svc jessie_tcp@_http._tcp.dns-test-service.dns-4658.svc]

Apr 29 13:55:09.515: INFO: DNS probes using dns-4658/dns-test-fcf3015c-9d05-4ae8-bc28-5ffe887556c3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:55:10.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4658" for this suite.

• [SLOW TEST:39.784 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":290,"completed":138,"skipped":2144,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:55:10.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-591, will wait for the garbage collector to delete the pods
Apr 29 13:55:16.452: INFO: Deleting Job.batch foo took: 6.820562ms
Apr 29 13:55:16.552: INFO: Terminating Job.batch foo pods took: 100.246467ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:55:53.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-591" for this suite.

• [SLOW TEST:43.551 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":290,"completed":139,"skipped":2153,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:55:53.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:55:53.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4145'
Apr 29 13:55:56.914: INFO: stderr: ""
Apr 29 13:55:56.915: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 29 13:55:56.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4145'
Apr 29 13:55:57.223: INFO: stderr: ""
Apr 29 13:55:57.223: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 13:55:58.228: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:55:58.228: INFO: Found 0 / 1
Apr 29 13:55:59.330: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:55:59.330: INFO: Found 0 / 1
Apr 29 13:56:00.228: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:56:00.228: INFO: Found 0 / 1
Apr 29 13:56:01.227: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:56:01.227: INFO: Found 1 / 1
Apr 29 13:56:01.227: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 29 13:56:01.230: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 13:56:01.230: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 29 13:56:01.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-z58mm --namespace=kubectl-4145'
Apr 29 13:56:01.359: INFO: stderr: ""
Apr 29 13:56:01.359: INFO: stdout: "Name:         agnhost-master-z58mm\nNamespace:    kubectl-4145\nPriority:     0\nNode:         kali-worker/172.17.0.15\nStart Time:   Wed, 29 Apr 2020 13:55:57 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.97\nIPs:\n  IP:           10.244.2.97\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://435fc259b5891c9284a800735d56537c0a02a12496529e77a07828c5901653f7\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 29 Apr 2020 13:55:59 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqmnw (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-tqmnw:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-tqmnw\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  5s    default-scheduler     Successfully assigned kubectl-4145/agnhost-master-z58mm to kali-worker\n  Normal  Pulled     3s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    2s    kubelet, 
kali-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, kali-worker  Started container agnhost-master\n"
Apr 29 13:56:01.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4145'
Apr 29 13:56:01.530: INFO: stderr: ""
Apr 29 13:56:01.530: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-4145\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-z58mm\n"
Apr 29 13:56:01.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4145'
Apr 29 13:56:01.640: INFO: stderr: ""
Apr 29 13:56:01.640: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-4145\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.104.111.46\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.97:6379\nSession Affinity:  None\nEvents:            \n"
Apr 29 13:56:01.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Apr 29 13:56:01.795: INFO: stderr: ""
Apr 29 13:56:01.795: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 29 Apr 2020 13:55:55 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 29 Apr 2020 13:51:47 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 29 Apr 2020 13:51:47 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 29 Apr 2020 13:51:47 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 29 Apr 2020 13:51:47 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  
hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4h24m\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     4h24m\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h24m\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      4h24m\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         4h24m\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)   
  0 (0%)      0 (0%)           0 (0%)         4h24m\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h24m\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4h24m\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h24m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Apr 29 13:56:01.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-4145'
Apr 29 13:56:01.902: INFO: stderr: ""
Apr 29 13:56:01.902: INFO: stdout: "Name:         kubectl-4145\nLabels:       e2e-framework=kubectl\n              e2e-run=d29d9444-0c6a-4445-a5ea-ffbefe9e2a77\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:01.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4145" for this suite.

• [SLOW TEST:8.045 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":290,"completed":140,"skipped":2157,"failed":0}
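The `describe` output above shows the ReplicationController's equality-based selector (`app=agnhost,role=master`) and the `Selector matched 1 pods` wait that precedes it. The matching itself is a subset check on the label map; a simplified sketch of that rule (not the real client-go selector code):

```python
def matches_selector(labels, selector):
    # Equality-based selector: every key=value pair in the selector
    # must be present with the same value in the pod's labels.
    return all(labels.get(k) == v for k, v in selector.items())

pod_labels = {"app": "agnhost", "role": "master"}
assert matches_selector(pod_labels, {"app": "agnhost", "role": "master"})
assert not matches_selector(pod_labels, {"role": "slave"})
```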
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:01.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-33c28474-7f30-42c0-ad21-4d4dd93455a1
STEP: Creating a pod to test consume secrets
Apr 29 13:56:01.980: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9" in namespace "projected-292" to be "Succeeded or Failed"
Apr 29 13:56:01.995: INFO: Pod "pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.285784ms
Apr 29 13:56:03.999: INFO: Pod "pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019829374s
Apr 29 13:56:06.004: INFO: Pod "pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02415972s
STEP: Saw pod success
Apr 29 13:56:06.004: INFO: Pod "pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9" satisfied condition "Succeeded or Failed"
Apr 29 13:56:06.007: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9 container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 13:56:06.058: INFO: Waiting for pod pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9 to disappear
Apr 29 13:56:06.070: INFO: Pod pod-projected-secrets-6814036b-cfe5-498f-ba07-a43c47e079a9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-292" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":141,"skipped":2167,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:06.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 13:56:06.153: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 13:56:06.162: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 13:56:06.164: INFO: 
Logging pods the apiserver thinks is on node kali-worker before test
Apr 29 13:56:06.168: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:56:06.168: INFO: 	Container kindnet-cni ready: true, restart count 1
Apr 29 13:56:06.168: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:56:06.168: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 13:56:06.168: INFO: agnhost-master-z58mm from kubectl-4145 started at 2020-04-29 13:55:57 +0000 UTC (1 container statuses recorded)
Apr 29 13:56:06.168: INFO: 	Container agnhost-master ready: true, restart count 0
Apr 29 13:56:06.168: INFO: 
Logging pods the apiserver thinks is on node kali-worker2 before test
Apr 29 13:56:06.172: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:56:06.172: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 13:56:06.173: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 13:56:06.173: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.160a4ee580e7efa5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:07.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4760" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":290,"completed":142,"skipped":2175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:07.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 13:56:07.932: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:08.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5010" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":290,"completed":143,"skipped":2197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:08.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311
STEP: creating the pod
Apr 29 13:56:08.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1650'
Apr 29 13:56:10.129: INFO: stderr: ""
Apr 29 13:56:10.129: INFO: stdout: "pod/pause created\n"
Apr 29 13:56:10.129: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 29 13:56:10.129: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1650" to be "running and ready"
Apr 29 13:56:10.246: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 117.467664ms
Apr 29 13:56:12.265: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136442812s
Apr 29 13:56:14.277: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.148145043s
Apr 29 13:56:14.277: INFO: Pod "pause" satisfied condition "running and ready"
Apr 29 13:56:14.277: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 29 13:56:14.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1650'
Apr 29 13:56:14.396: INFO: stderr: ""
Apr 29 13:56:14.396: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 29 13:56:14.396: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1650'
Apr 29 13:56:14.515: INFO: stderr: ""
Apr 29 13:56:14.515: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 29 13:56:14.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1650'
Apr 29 13:56:14.625: INFO: stderr: ""
Apr 29 13:56:14.625: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 29 13:56:14.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1650'
Apr 29 13:56:14.742: INFO: stderr: ""
Apr 29 13:56:14.742: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318
STEP: using delete to clean up resources
Apr 29 13:56:14.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1650'
Apr 29 13:56:14.886: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 13:56:14.886: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 29 13:56:14.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1650'
Apr 29 13:56:15.214: INFO: stderr: "No resources found in kubectl-1650 namespace.\n"
Apr 29 13:56:15.214: INFO: stdout: ""
Apr 29 13:56:15.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1650 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 13:56:15.306: INFO: stderr: ""
Apr 29 13:56:15.306: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:15.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1650" for this suite.

• [SLOW TEST:6.544 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":290,"completed":144,"skipped":2224,"failed":0}
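The label test above drives two forms of `kubectl label`: `testing-label=testing-label-value` to set a label, and the trailing-dash form `testing-label-` to remove it. A simplified sketch of how those two argument shapes change a label map (not kubectl's actual implementation):

```python
def apply_label_op(labels, op):
    # "key=value" sets the label; "key-" removes it, matching the two
    # kubectl label invocations in the test above (simplified sketch).
    new = dict(labels)
    if op.endswith("-") and "=" not in op:
        new.pop(op[:-1], None)
    else:
        key, _, value = op.partition("=")
        new[key] = value
    return new

labels = apply_label_op({}, "testing-label=testing-label-value")
assert labels == {"testing-label": "testing-label-value"}
assert apply_label_op(labels, "testing-label-") == {}
```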
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:15.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Apr 29 13:56:15.363: INFO: Waiting up to 5m0s for pod "client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079" in namespace "containers-4784" to be "Succeeded or Failed"
Apr 29 13:56:15.381: INFO: Pod "client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079": Phase="Pending", Reason="", readiness=false. Elapsed: 18.200786ms
Apr 29 13:56:17.385: INFO: Pod "client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022226613s
Apr 29 13:56:19.389: INFO: Pod "client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026182189s
STEP: Saw pod success
Apr 29 13:56:19.389: INFO: Pod "client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079" satisfied condition "Succeeded or Failed"
Apr 29 13:56:19.392: INFO: Trying to get logs from node kali-worker pod client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079 container test-container: 
STEP: delete the pod
Apr 29 13:56:19.455: INFO: Waiting for pod client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079 to disappear
Apr 29 13:56:19.471: INFO: Pod client-containers-4073b4f1-bf86-4bb5-b3d4-b0b7ec1d3079 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:19.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4784" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":290,"completed":145,"skipped":2242,"failed":0}
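The "override all" pod above sets both `command` and `args` on the container, replacing the image's ENTRYPOINT and CMD. The full precedence rules are small enough to sketch; the function and the example values are hypothetical (the log does not show the actual command the test used), but the rules follow the Pod spec:

```python
def effective_invocation(image_entrypoint, image_cmd,
                         pod_command=None, pod_args=None):
    # Kubernetes override rules:
    #   command set  -> replaces ENTRYPOINT (image CMD is then ignored)
    #   args set     -> replace CMD
    #   neither set  -> image ENTRYPOINT + CMD run as-is
    if pod_command:
        return list(pod_command) + list(pod_args or [])
    return list(image_entrypoint) + list(pod_args if pod_args else image_cmd)

# "Override all": both command and args set, as in the test above.
assert effective_invocation(["/entrypoint"], ["default-arg"],
                            pod_command=["/override"],
                            pod_args=["a", "b"]) == ["/override", "a", "b"]
# Neither set: the image defaults run unchanged.
assert effective_invocation(["/entrypoint"], ["default-arg"]) == ["/entrypoint", "default-arg"]
```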
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:19.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8313
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 13:56:19.584: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 13:56:19.670: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:56:21.675: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 13:56:23.676: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:25.673: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:27.675: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:29.674: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:31.674: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:33.674: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:35.674: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:37.674: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 13:56:39.674: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 13:56:39.680: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 13:56:43.789: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.100:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8313 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:56:43.789: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:56:43.827003       7 log.go:172] (0xc003c79340) (0xc001b24d20) Create stream
I0429 13:56:43.827033       7 log.go:172] (0xc003c79340) (0xc001b24d20) Stream added, broadcasting: 1
I0429 13:56:43.830061       7 log.go:172] (0xc003c79340) Reply frame received for 1
I0429 13:56:43.830130       7 log.go:172] (0xc003c79340) (0xc00233bea0) Create stream
I0429 13:56:43.830156       7 log.go:172] (0xc003c79340) (0xc00233bea0) Stream added, broadcasting: 3
I0429 13:56:43.831463       7 log.go:172] (0xc003c79340) Reply frame received for 3
I0429 13:56:43.831500       7 log.go:172] (0xc003c79340) (0xc001b24dc0) Create stream
I0429 13:56:43.831515       7 log.go:172] (0xc003c79340) (0xc001b24dc0) Stream added, broadcasting: 5
I0429 13:56:43.832567       7 log.go:172] (0xc003c79340) Reply frame received for 5
I0429 13:56:43.927475       7 log.go:172] (0xc003c79340) Data frame received for 5
I0429 13:56:43.927519       7 log.go:172] (0xc001b24dc0) (5) Data frame handling
I0429 13:56:43.927551       7 log.go:172] (0xc003c79340) Data frame received for 3
I0429 13:56:43.927565       7 log.go:172] (0xc00233bea0) (3) Data frame handling
I0429 13:56:43.927578       7 log.go:172] (0xc00233bea0) (3) Data frame sent
I0429 13:56:43.927589       7 log.go:172] (0xc003c79340) Data frame received for 3
I0429 13:56:43.927602       7 log.go:172] (0xc00233bea0) (3) Data frame handling
I0429 13:56:43.929874       7 log.go:172] (0xc003c79340) Data frame received for 1
I0429 13:56:43.929890       7 log.go:172] (0xc001b24d20) (1) Data frame handling
I0429 13:56:43.929897       7 log.go:172] (0xc001b24d20) (1) Data frame sent
I0429 13:56:43.929908       7 log.go:172] (0xc003c79340) (0xc001b24d20) Stream removed, broadcasting: 1
I0429 13:56:43.929926       7 log.go:172] (0xc003c79340) Go away received
I0429 13:56:43.930019       7 log.go:172] (0xc003c79340) (0xc001b24d20) Stream removed, broadcasting: 1
I0429 13:56:43.930034       7 log.go:172] (0xc003c79340) (0xc00233bea0) Stream removed, broadcasting: 3
I0429 13:56:43.930046       7 log.go:172] (0xc003c79340) (0xc001b24dc0) Stream removed, broadcasting: 5
Apr 29 13:56:43.930: INFO: Found all expected endpoints: [netserver-0]
Apr 29 13:56:43.933: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.103:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8313 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 13:56:43.933: INFO: >>> kubeConfig: /root/.kube/config
I0429 13:56:43.964803       7 log.go:172] (0xc003c79a20) (0xc001b25400) Create stream
I0429 13:56:43.964829       7 log.go:172] (0xc003c79a20) (0xc001b25400) Stream added, broadcasting: 1
I0429 13:56:43.967207       7 log.go:172] (0xc003c79a20) Reply frame received for 1
I0429 13:56:43.967252       7 log.go:172] (0xc003c79a20) (0xc001e61d60) Create stream
I0429 13:56:43.967270       7 log.go:172] (0xc003c79a20) (0xc001e61d60) Stream added, broadcasting: 3
I0429 13:56:43.968078       7 log.go:172] (0xc003c79a20) Reply frame received for 3
I0429 13:56:43.968119       7 log.go:172] (0xc003c79a20) (0xc0015bc0a0) Create stream
I0429 13:56:43.968133       7 log.go:172] (0xc003c79a20) (0xc0015bc0a0) Stream added, broadcasting: 5
I0429 13:56:43.969024       7 log.go:172] (0xc003c79a20) Reply frame received for 5
I0429 13:56:44.038465       7 log.go:172] (0xc003c79a20) Data frame received for 5
I0429 13:56:44.038536       7 log.go:172] (0xc0015bc0a0) (5) Data frame handling
I0429 13:56:44.038572       7 log.go:172] (0xc003c79a20) Data frame received for 3
I0429 13:56:44.038594       7 log.go:172] (0xc001e61d60) (3) Data frame handling
I0429 13:56:44.038607       7 log.go:172] (0xc001e61d60) (3) Data frame sent
I0429 13:56:44.038618       7 log.go:172] (0xc003c79a20) Data frame received for 3
I0429 13:56:44.038629       7 log.go:172] (0xc001e61d60) (3) Data frame handling
I0429 13:56:44.039757       7 log.go:172] (0xc003c79a20) Data frame received for 1
I0429 13:56:44.039795       7 log.go:172] (0xc001b25400) (1) Data frame handling
I0429 13:56:44.039810       7 log.go:172] (0xc001b25400) (1) Data frame sent
I0429 13:56:44.039862       7 log.go:172] (0xc003c79a20) (0xc001b25400) Stream removed, broadcasting: 1
I0429 13:56:44.039883       7 log.go:172] (0xc003c79a20) Go away received
I0429 13:56:44.040122       7 log.go:172] (0xc003c79a20) (0xc001b25400) Stream removed, broadcasting: 1
I0429 13:56:44.040137       7 log.go:172] (0xc003c79a20) (0xc001e61d60) Stream removed, broadcasting: 3
I0429 13:56:44.040145       7 log.go:172] (0xc003c79a20) (0xc0015bc0a0) Stream removed, broadcasting: 5
Apr 29 13:56:44.040: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:44.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8313" for this suite.

• [SLOW TEST:24.567 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":146,"skipped":2265,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:44.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-6e24690e-b2d7-4b56-8d37-fbf0405cbdc7
STEP: Creating a pod to test consume secrets
Apr 29 13:56:44.163: INFO: Waiting up to 5m0s for pod "pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b" in namespace "secrets-5601" to be "Succeeded or Failed"
Apr 29 13:56:44.180: INFO: Pod "pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.404059ms
Apr 29 13:56:46.210: INFO: Pod "pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04782525s
Apr 29 13:56:48.215: INFO: Pod "pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051986987s
STEP: Saw pod success
Apr 29 13:56:48.215: INFO: Pod "pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b" satisfied condition "Succeeded or Failed"
Apr 29 13:56:48.218: INFO: Trying to get logs from node kali-worker pod pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b container secret-volume-test: 
STEP: delete the pod
Apr 29 13:56:48.255: INFO: Waiting for pod pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b to disappear
Apr 29 13:56:48.269: INFO: Pod pod-secrets-8fa27201-1919-4e44-89de-c4235c050f6b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:56:48.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5601" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":290,"completed":147,"skipped":2288,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:56:48.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-7580
Apr 29 13:56:54.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 29 13:56:54.614: INFO: stderr: "I0429 13:56:54.530934    2487 log.go:172] (0xc000a4c370) (0xc000526320) Create stream\nI0429 13:56:54.531008    2487 log.go:172] (0xc000a4c370) (0xc000526320) Stream added, broadcasting: 1\nI0429 13:56:54.538215    2487 log.go:172] (0xc000a4c370) Reply frame received for 1\nI0429 13:56:54.538253    2487 log.go:172] (0xc000a4c370) (0xc000509a40) Create stream\nI0429 13:56:54.538266    2487 log.go:172] (0xc000a4c370) (0xc000509a40) Stream added, broadcasting: 3\nI0429 13:56:54.539268    2487 log.go:172] (0xc000a4c370) Reply frame received for 3\nI0429 13:56:54.539307    2487 log.go:172] (0xc000a4c370) (0xc00046ea00) Create stream\nI0429 13:56:54.539317    2487 log.go:172] (0xc000a4c370) (0xc00046ea00) Stream added, broadcasting: 5\nI0429 13:56:54.539969    2487 log.go:172] (0xc000a4c370) Reply frame received for 5\nI0429 13:56:54.601324    2487 log.go:172] (0xc000a4c370) Data frame received for 5\nI0429 13:56:54.601379    2487 log.go:172] (0xc00046ea00) (5) Data frame handling\nI0429 13:56:54.601413    2487 log.go:172] (0xc00046ea00) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0429 13:56:54.607865    2487 log.go:172] (0xc000a4c370) Data frame received for 3\nI0429 13:56:54.607887    2487 log.go:172] (0xc000509a40) (3) Data frame handling\nI0429 13:56:54.607911    2487 log.go:172] (0xc000509a40) (3) Data frame sent\nI0429 13:56:54.608059    2487 log.go:172] (0xc000a4c370) Data frame received for 5\nI0429 13:56:54.608082    2487 log.go:172] (0xc00046ea00) (5) Data frame handling\nI0429 13:56:54.608206    2487 log.go:172] (0xc000a4c370) Data frame received for 3\nI0429 13:56:54.608222    2487 log.go:172] (0xc000509a40) (3) Data frame handling\nI0429 13:56:54.610177    2487 log.go:172] (0xc000a4c370) Data frame received for 1\nI0429 13:56:54.610206    2487 log.go:172] (0xc000526320) (1) Data frame handling\nI0429 13:56:54.610240    2487 log.go:172] (0xc000526320) (1) Data frame sent\nI0429 13:56:54.610259    2487 log.go:172] (0xc000a4c370) (0xc000526320) Stream removed, broadcasting: 1\nI0429 13:56:54.610282    2487 log.go:172] (0xc000a4c370) Go away received\nI0429 13:56:54.610600    2487 log.go:172] (0xc000a4c370) (0xc000526320) Stream removed, broadcasting: 1\nI0429 13:56:54.610616    2487 log.go:172] (0xc000a4c370) (0xc000509a40) Stream removed, broadcasting: 3\nI0429 13:56:54.610624    2487 log.go:172] (0xc000a4c370) (0xc00046ea00) Stream removed, broadcasting: 5\n"
Apr 29 13:56:54.614: INFO: stdout: "iptables"
Apr 29 13:56:54.614: INFO: proxyMode: iptables
Apr 29 13:56:54.619: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:56:54.642: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:56:56.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:56:56.646: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:56:58.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:56:58.647: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:57:00.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:57:00.646: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:57:02.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:57:02.646: INFO: Pod kube-proxy-mode-detector still exists
Apr 29 13:57:04.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 29 13:57:04.646: INFO: Pod kube-proxy-mode-detector no longer exists
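Editor's note: before exercising the affinity timeout, the test reads `http://localhost:10249/proxyMode` from kube-proxy via the detector pod cleaned up above, and only proceeds for proxy modes it understands. The branch can be sketched offline with the response body stubbed in:

```shell
# Stand-in for the body returned by kube-proxy's /proxyMode endpoint
# (the live run above observed "iptables").
proxy_mode="iptables"

case "$proxy_mode" in
  iptables|ipvs) supported=yes ;;
  *)             supported=no  ;;
esac
echo "proxyMode=$proxy_mode supported=$supported"
```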
STEP: creating service affinity-nodeport-timeout in namespace services-7580
STEP: creating replication controller affinity-nodeport-timeout in namespace services-7580
I0429 13:57:04.743500       7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7580, replica count: 3
I0429 13:57:07.794019       7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 13:57:10.794386       7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 13:57:10.803: INFO: Creating new exec pod
Apr 29 13:57:15.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Apr 29 13:57:16.829: INFO: stderr: "I0429 13:57:16.758417    2507 log.go:172] (0xc00003bef0) (0xc000232640) Create stream\nI0429 13:57:16.758494    2507 log.go:172] (0xc00003bef0) (0xc000232640) Stream added, broadcasting: 1\nI0429 13:57:16.760642    2507 log.go:172] (0xc00003bef0) Reply frame received for 1\nI0429 13:57:16.760685    2507 log.go:172] (0xc00003bef0) (0xc000233a40) Create stream\nI0429 13:57:16.760704    2507 log.go:172] (0xc00003bef0) (0xc000233a40) Stream added, broadcasting: 3\nI0429 13:57:16.761576    2507 log.go:172] (0xc00003bef0) Reply frame received for 3\nI0429 13:57:16.761621    2507 log.go:172] (0xc00003bef0) (0xc000208320) Create stream\nI0429 13:57:16.761639    2507 log.go:172] (0xc00003bef0) (0xc000208320) Stream added, broadcasting: 5\nI0429 13:57:16.762296    2507 log.go:172] (0xc00003bef0) Reply frame received for 5\nI0429 13:57:16.823319    2507 log.go:172] (0xc00003bef0) Data frame received for 5\nI0429 13:57:16.823349    2507 log.go:172] (0xc000208320) (5) Data frame handling\nI0429 13:57:16.823404    2507 log.go:172] (0xc000208320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0429 13:57:16.823778    2507 log.go:172] (0xc00003bef0) Data frame received for 5\nI0429 13:57:16.823809    2507 log.go:172] (0xc000208320) (5) Data frame handling\nI0429 13:57:16.823829    2507 log.go:172] (0xc000208320) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0429 13:57:16.823889    2507 log.go:172] (0xc00003bef0) Data frame received for 5\nI0429 13:57:16.823907    2507 log.go:172] (0xc000208320) (5) Data frame handling\nI0429 13:57:16.824094    2507 log.go:172] (0xc00003bef0) Data frame received for 3\nI0429 13:57:16.824109    2507 log.go:172] (0xc000233a40) (3) Data frame handling\nI0429 13:57:16.825099    2507 log.go:172] (0xc00003bef0) Data frame received for 1\nI0429 13:57:16.825270    2507 log.go:172] (0xc000232640) (1) Data frame handling\nI0429 13:57:16.825284    2507 log.go:172] (0xc000232640) (1) Data frame sent\nI0429 13:57:16.825300    2507 log.go:172] (0xc00003bef0) (0xc000232640) Stream removed, broadcasting: 1\nI0429 13:57:16.825452    2507 log.go:172] (0xc00003bef0) Go away received\nI0429 13:57:16.825593    2507 log.go:172] (0xc00003bef0) (0xc000232640) Stream removed, broadcasting: 1\nI0429 13:57:16.825607    2507 log.go:172] (0xc00003bef0) (0xc000233a40) Stream removed, broadcasting: 3\nI0429 13:57:16.825613    2507 log.go:172] (0xc00003bef0) (0xc000208320) Stream removed, broadcasting: 5\n"
Apr 29 13:57:16.829: INFO: stdout: ""
Apr 29 13:57:16.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c nc -zv -t -w 2 10.106.244.216 80'
Apr 29 13:57:17.244: INFO: stderr: "I0429 13:57:17.169274    2528 log.go:172] (0xc000b13970) (0xc000b38640) Create stream\nI0429 13:57:17.169325    2528 log.go:172] (0xc000b13970) (0xc000b38640) Stream added, broadcasting: 1\nI0429 13:57:17.173861    2528 log.go:172] (0xc000b13970) Reply frame received for 1\nI0429 13:57:17.173902    2528 log.go:172] (0xc000b13970) (0xc0005ce640) Create stream\nI0429 13:57:17.173916    2528 log.go:172] (0xc000b13970) (0xc0005ce640) Stream added, broadcasting: 3\nI0429 13:57:17.174823    2528 log.go:172] (0xc000b13970) Reply frame received for 3\nI0429 13:57:17.174860    2528 log.go:172] (0xc000b13970) (0xc0005ceb40) Create stream\nI0429 13:57:17.174874    2528 log.go:172] (0xc000b13970) (0xc0005ceb40) Stream added, broadcasting: 5\nI0429 13:57:17.175637    2528 log.go:172] (0xc000b13970) Reply frame received for 5\nI0429 13:57:17.237769    2528 log.go:172] (0xc000b13970) Data frame received for 3\nI0429 13:57:17.237898    2528 log.go:172] (0xc0005ce640) (3) Data frame handling\nI0429 13:57:17.237981    2528 log.go:172] (0xc000b13970) Data frame received for 5\nI0429 13:57:17.238020    2528 log.go:172] (0xc0005ceb40) (5) Data frame handling\nI0429 13:57:17.238048    2528 log.go:172] (0xc0005ceb40) (5) Data frame sent\nI0429 13:57:17.238065    2528 log.go:172] (0xc000b13970) Data frame received for 5\nI0429 13:57:17.238084    2528 log.go:172] (0xc0005ceb40) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.244.216 80\nConnection to 10.106.244.216 80 port [tcp/http] succeeded!\nI0429 13:57:17.239244    2528 log.go:172] (0xc000b13970) Data frame received for 1\nI0429 13:57:17.239262    2528 log.go:172] (0xc000b38640) (1) Data frame handling\nI0429 13:57:17.239282    2528 log.go:172] (0xc000b38640) (1) Data frame sent\nI0429 13:57:17.239301    2528 log.go:172] (0xc000b13970) (0xc000b38640) Stream removed, broadcasting: 1\nI0429 13:57:17.239441    2528 log.go:172] (0xc000b13970) Go away received\nI0429 13:57:17.239657    2528 log.go:172] (0xc000b13970) (0xc000b38640) Stream removed, broadcasting: 1\nI0429 13:57:17.239675    2528 log.go:172] (0xc000b13970) (0xc0005ce640) Stream removed, broadcasting: 3\nI0429 13:57:17.239684    2528 log.go:172] (0xc000b13970) (0xc0005ceb40) Stream removed, broadcasting: 5\n"
Apr 29 13:57:17.244: INFO: stdout: ""
Apr 29 13:57:17.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30296'
Apr 29 13:57:17.472: INFO: stderr: "I0429 13:57:17.359881    2548 log.go:172] (0xc000a3f290) (0xc000671d60) Create stream\nI0429 13:57:17.360021    2548 log.go:172] (0xc000a3f290) (0xc000671d60) Stream added, broadcasting: 1\nI0429 13:57:17.362704    2548 log.go:172] (0xc000a3f290) Reply frame received for 1\nI0429 13:57:17.362734    2548 log.go:172] (0xc000a3f290) (0xc00069e780) Create stream\nI0429 13:57:17.362740    2548 log.go:172] (0xc000a3f290) (0xc00069e780) Stream added, broadcasting: 3\nI0429 13:57:17.363552    2548 log.go:172] (0xc000a3f290) Reply frame received for 3\nI0429 13:57:17.363591    2548 log.go:172] (0xc000a3f290) (0xc0006fa0a0) Create stream\nI0429 13:57:17.363605    2548 log.go:172] (0xc000a3f290) (0xc0006fa0a0) Stream added, broadcasting: 5\nI0429 13:57:17.364484    2548 log.go:172] (0xc000a3f290) Reply frame received for 5\nI0429 13:57:17.464538    2548 log.go:172] (0xc000a3f290) Data frame received for 5\nI0429 13:57:17.464579    2548 log.go:172] (0xc0006fa0a0) (5) Data frame handling\nI0429 13:57:17.464614    2548 log.go:172] (0xc0006fa0a0) (5) Data frame sent\nI0429 13:57:17.464630    2548 log.go:172] (0xc000a3f290) Data frame received for 5\nI0429 13:57:17.464639    2548 log.go:172] (0xc0006fa0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30296\nConnection to 172.17.0.15 30296 port [tcp/30296] succeeded!\nI0429 13:57:17.464662    2548 log.go:172] (0xc0006fa0a0) (5) Data frame sent\nI0429 13:57:17.464747    2548 log.go:172] (0xc000a3f290) Data frame received for 3\nI0429 13:57:17.464760    2548 log.go:172] (0xc00069e780) (3) Data frame handling\nI0429 13:57:17.464939    2548 log.go:172] (0xc000a3f290) Data frame received for 5\nI0429 13:57:17.464958    2548 log.go:172] (0xc0006fa0a0) (5) Data frame handling\nI0429 13:57:17.466752    2548 log.go:172] (0xc000a3f290) Data frame received for 1\nI0429 13:57:17.466782    2548 log.go:172] (0xc000671d60) (1) Data frame handling\nI0429 13:57:17.466811    2548 log.go:172] (0xc000671d60) (1) Data frame sent\nI0429 13:57:17.466868    2548 log.go:172] (0xc000a3f290) (0xc000671d60) Stream removed, broadcasting: 1\nI0429 13:57:17.466897    2548 log.go:172] (0xc000a3f290) Go away received\nI0429 13:57:17.467268    2548 log.go:172] (0xc000a3f290) (0xc000671d60) Stream removed, broadcasting: 1\nI0429 13:57:17.467290    2548 log.go:172] (0xc000a3f290) (0xc00069e780) Stream removed, broadcasting: 3\nI0429 13:57:17.467300    2548 log.go:172] (0xc000a3f290) (0xc0006fa0a0) Stream removed, broadcasting: 5\n"
Apr 29 13:57:17.472: INFO: stdout: ""
Apr 29 13:57:17.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30296'
Apr 29 13:57:17.665: INFO: stderr: "I0429 13:57:17.590571    2569 log.go:172] (0xc000b494a0) (0xc000b1a460) Create stream\nI0429 13:57:17.590614    2569 log.go:172] (0xc000b494a0) (0xc000b1a460) Stream added, broadcasting: 1\nI0429 13:57:17.595270    2569 log.go:172] (0xc000b494a0) Reply frame received for 1\nI0429 13:57:17.595311    2569 log.go:172] (0xc000b494a0) (0xc0003ca1e0) Create stream\nI0429 13:57:17.595320    2569 log.go:172] (0xc000b494a0) (0xc0003ca1e0) Stream added, broadcasting: 3\nI0429 13:57:17.596428    2569 log.go:172] (0xc000b494a0) Reply frame received for 3\nI0429 13:57:17.596451    2569 log.go:172] (0xc000b494a0) (0xc00039edc0) Create stream\nI0429 13:57:17.596459    2569 log.go:172] (0xc000b494a0) (0xc00039edc0) Stream added, broadcasting: 5\nI0429 13:57:17.597481    2569 log.go:172] (0xc000b494a0) Reply frame received for 5\nI0429 13:57:17.656763    2569 log.go:172] (0xc000b494a0) Data frame received for 5\nI0429 13:57:17.656809    2569 log.go:172] (0xc00039edc0) (5) Data frame handling\nI0429 13:57:17.656832    2569 log.go:172] (0xc00039edc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 30296\nI0429 13:57:17.657046    2569 log.go:172] (0xc000b494a0) Data frame received for 5\nI0429 13:57:17.657092    2569 log.go:172] (0xc00039edc0) (5) Data frame handling\nI0429 13:57:17.657383    2569 log.go:172] (0xc00039edc0) (5) Data frame sent\nConnection to 172.17.0.18 30296 port [tcp/30296] succeeded!\nI0429 13:57:17.657415    2569 log.go:172] (0xc000b494a0) Data frame received for 5\nI0429 13:57:17.657468    2569 log.go:172] (0xc00039edc0) (5) Data frame handling\nI0429 13:57:17.657928    2569 log.go:172] (0xc000b494a0) Data frame received for 3\nI0429 13:57:17.657957    2569 log.go:172] (0xc0003ca1e0) (3) Data frame handling\nI0429 13:57:17.659592    2569 log.go:172] (0xc000b494a0) Data frame received for 1\nI0429 13:57:17.659624    2569 log.go:172] (0xc000b1a460) (1) Data frame handling\nI0429 13:57:17.659647    2569 log.go:172] (0xc000b1a460) (1) Data frame sent\nI0429 13:57:17.659675    2569 log.go:172] (0xc000b494a0) (0xc000b1a460) Stream removed, broadcasting: 1\nI0429 13:57:17.659709    2569 log.go:172] (0xc000b494a0) Go away received\nI0429 13:57:17.660121    2569 log.go:172] (0xc000b494a0) (0xc000b1a460) Stream removed, broadcasting: 1\nI0429 13:57:17.660151    2569 log.go:172] (0xc000b494a0) (0xc0003ca1e0) Stream removed, broadcasting: 3\nI0429 13:57:17.660165    2569 log.go:172] (0xc000b494a0) (0xc00039edc0) Stream removed, broadcasting: 5\n"
Apr 29 13:57:17.665: INFO: stdout: ""
Apr 29 13:57:17.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.15:30296/ ; done'
Apr 29 13:57:17.930: INFO: stderr: "I0429 13:57:17.778672    2589 log.go:172] (0xc0009f0f20) (0xc000994320) Create stream\nI0429 13:57:17.778713    2589 log.go:172] (0xc0009f0f20) (0xc000994320) Stream added, broadcasting: 1\nI0429 13:57:17.782035    2589 log.go:172] (0xc0009f0f20) Reply frame received for 1\nI0429 13:57:17.782091    2589 log.go:172] (0xc0009f0f20) (0xc000700640) Create stream\nI0429 13:57:17.782107    2589 log.go:172] (0xc0009f0f20) (0xc000700640) Stream added, broadcasting: 3\nI0429 13:57:17.782907    2589 log.go:172] (0xc0009f0f20) Reply frame received for 3\nI0429 13:57:17.782941    2589 log.go:172] (0xc0009f0f20) (0xc00053adc0) Create stream\nI0429 13:57:17.782956    2589 log.go:172] (0xc0009f0f20) (0xc00053adc0) Stream added, broadcasting: 5\nI0429 13:57:17.783604    2589 log.go:172] (0xc0009f0f20) Reply frame received for 5\nI0429 13:57:17.839777    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.839806    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.839825    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ seq 0 15\nI0429 13:57:17.840144    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.840173    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.840184    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\nI0429 13:57:17.840203    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.840214    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.840224    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.840240    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.840249    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.840258    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.844985    2589 log.go:172] (0xc0009f0f20) Data frame received for 
3\nI0429 13:57:17.845009    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.845032    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.845576    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.845590    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.845602    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.845644    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.845668    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.845699    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.848586    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.848599    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.848620    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.850000    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.850015    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.850033    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.850044    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.850051    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.850057    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.853947    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.853966    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.853980    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.854410    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.854431    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.854446    2589 log.go:172] (0xc00053adc0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.854460    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.854475    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.854492    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.858121    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.858139    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.858155    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.858382    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.858400    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.858417    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curlI0429 13:57:17.858435    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.858444    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.858455    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.858468    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.858476    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.858482    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.862073    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.862089    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.862104    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.862426    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.862435    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.862441    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.862454    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.862473    2589 log.go:172] (0xc00053adc0) 
(5) Data frame handling\nI0429 13:57:17.862491    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\nI0429 13:57:17.862504    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.862515    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.862540    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\nI0429 13:57:17.867973    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.868002    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.868032    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.868444    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.868458    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.868467    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.868493    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.868514    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.868540    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.871759    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.871780    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.871791    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.872714    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.872742    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.872772    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.872793    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.872805    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.872817    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.15:30296/\nI0429 13:57:17.878755    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.878795    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.878828    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.879013    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.879033    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.879055    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\nI0429 13:57:17.879066    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\n+ echo\n+ I0429 13:57:17.879080    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.879105    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.879122    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.879144    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.879168    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.883263    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.883277    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.883286    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.883699    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.883712    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.883718    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.883729    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.883734    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.883741    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.888774    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.888795    2589 log.go:172] 
(0xc000700640) (3) Data frame handling\nI0429 13:57:17.888816    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.889341    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.889373    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.889392    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.889412    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.889435    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.889467    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.894478    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.894503    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.894528    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.895297    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.895313    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.895322    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.895338    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.895358    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.895382    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.900492    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.900528    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.900560    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.901050    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.901066    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.901075    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.901096    2589 
log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.901302    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.901331    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.906053    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.906082    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.906098    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.906431    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.906457    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.906477    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.906503    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.906520    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.906540    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.912897    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.912922    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.912945    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.913696    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.913732    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.913748    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.913766    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.913776    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.913798    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.918222    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.918240    2589 
log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.918250    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.918965    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.918989    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.919010    2589 log.go:172] (0xc00053adc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:17.919232    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.919242    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.919255    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.924662    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.924677    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.924696    2589 log.go:172] (0xc000700640) (3) Data frame sent\nI0429 13:57:17.925513    2589 log.go:172] (0xc0009f0f20) Data frame received for 3\nI0429 13:57:17.925526    2589 log.go:172] (0xc000700640) (3) Data frame handling\nI0429 13:57:17.926056    2589 log.go:172] (0xc0009f0f20) Data frame received for 5\nI0429 13:57:17.926072    2589 log.go:172] (0xc00053adc0) (5) Data frame handling\nI0429 13:57:17.927060    2589 log.go:172] (0xc0009f0f20) Data frame received for 1\nI0429 13:57:17.927093    2589 log.go:172] (0xc000994320) (1) Data frame handling\nI0429 13:57:17.927136    2589 log.go:172] (0xc000994320) (1) Data frame sent\nI0429 13:57:17.927159    2589 log.go:172] (0xc0009f0f20) (0xc000994320) Stream removed, broadcasting: 1\nI0429 13:57:17.927180    2589 log.go:172] (0xc0009f0f20) Go away received\nI0429 13:57:17.927479    2589 log.go:172] (0xc0009f0f20) (0xc000994320) Stream removed, broadcasting: 1\nI0429 13:57:17.927492    2589 log.go:172] (0xc0009f0f20) (0xc000700640) Stream removed, broadcasting: 3\nI0429 13:57:17.927498    2589 log.go:172] (0xc0009f0f20) (0xc00053adc0) Stream removed, broadcasting: 
5\n"
Apr 29 13:57:17.931: INFO: stdout: "\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql\naffinity-nodeport-timeout-sg7ql"
Apr 29 13:57:17.931: INFO: Received response from host: 
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Received response from host: affinity-nodeport-timeout-sg7ql
Apr 29 13:57:17.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.15:30296/'
Apr 29 13:57:18.126: INFO: stderr: "I0429 13:57:18.046591    2611 log.go:172] (0xc000997760) (0xc000b6c6e0) Create stream\nI0429 13:57:18.046672    2611 log.go:172] (0xc000997760) (0xc000b6c6e0) Stream added, broadcasting: 1\nI0429 13:57:18.050093    2611 log.go:172] (0xc000997760) Reply frame received for 1\nI0429 13:57:18.050124    2611 log.go:172] (0xc000997760) (0xc0004a6280) Create stream\nI0429 13:57:18.050132    2611 log.go:172] (0xc000997760) (0xc0004a6280) Stream added, broadcasting: 3\nI0429 13:57:18.051075    2611 log.go:172] (0xc000997760) Reply frame received for 3\nI0429 13:57:18.051111    2611 log.go:172] (0xc000997760) (0xc0003ecdc0) Create stream\nI0429 13:57:18.051128    2611 log.go:172] (0xc000997760) (0xc0003ecdc0) Stream added, broadcasting: 5\nI0429 13:57:18.052016    2611 log.go:172] (0xc000997760) Reply frame received for 5\nI0429 13:57:18.116422    2611 log.go:172] (0xc000997760) Data frame received for 5\nI0429 13:57:18.116462    2611 log.go:172] (0xc0003ecdc0) (5) Data frame handling\nI0429 13:57:18.116484    2611 log.go:172] (0xc0003ecdc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:18.118975    2611 log.go:172] (0xc000997760) Data frame received for 3\nI0429 13:57:18.119005    2611 log.go:172] (0xc0004a6280) (3) Data frame handling\nI0429 13:57:18.119019    2611 log.go:172] (0xc0004a6280) (3) Data frame sent\nI0429 13:57:18.119443    2611 log.go:172] (0xc000997760) Data frame received for 3\nI0429 13:57:18.119459    2611 log.go:172] (0xc0004a6280) (3) Data frame handling\nI0429 13:57:18.119544    2611 log.go:172] (0xc000997760) Data frame received for 5\nI0429 13:57:18.119557    2611 log.go:172] (0xc0003ecdc0) (5) Data frame handling\nI0429 13:57:18.121317    2611 log.go:172] (0xc000997760) Data frame received for 1\nI0429 13:57:18.121340    2611 log.go:172] (0xc000b6c6e0) (1) Data frame handling\nI0429 13:57:18.121359    2611 log.go:172] (0xc000b6c6e0) (1) Data frame sent\nI0429 
13:57:18.121373    2611 log.go:172] (0xc000997760) (0xc000b6c6e0) Stream removed, broadcasting: 1\nI0429 13:57:18.121391    2611 log.go:172] (0xc000997760) Go away received\nI0429 13:57:18.121744    2611 log.go:172] (0xc000997760) (0xc000b6c6e0) Stream removed, broadcasting: 1\nI0429 13:57:18.121762    2611 log.go:172] (0xc000997760) (0xc0004a6280) Stream removed, broadcasting: 3\nI0429 13:57:18.121771    2611 log.go:172] (0xc000997760) (0xc0003ecdc0) Stream removed, broadcasting: 5\n"
Apr 29 13:57:18.126: INFO: stdout: "affinity-nodeport-timeout-sg7ql"
Apr 29 13:57:33.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7580 execpod-affinityjdq75 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.15:30296/'
Apr 29 13:57:36.399: INFO: stderr: "I0429 13:57:36.296177    2631 log.go:172] (0xc00082b6b0) (0xc000a98280) Create stream\nI0429 13:57:36.296246    2631 log.go:172] (0xc00082b6b0) (0xc000a98280) Stream added, broadcasting: 1\nI0429 13:57:36.301960    2631 log.go:172] (0xc00082b6b0) Reply frame received for 1\nI0429 13:57:36.301992    2631 log.go:172] (0xc00082b6b0) (0xc000816280) Create stream\nI0429 13:57:36.302000    2631 log.go:172] (0xc00082b6b0) (0xc000816280) Stream added, broadcasting: 3\nI0429 13:57:36.302928    2631 log.go:172] (0xc00082b6b0) Reply frame received for 3\nI0429 13:57:36.302966    2631 log.go:172] (0xc00082b6b0) (0xc0007961e0) Create stream\nI0429 13:57:36.302978    2631 log.go:172] (0xc00082b6b0) (0xc0007961e0) Stream added, broadcasting: 5\nI0429 13:57:36.304018    2631 log.go:172] (0xc00082b6b0) Reply frame received for 5\nI0429 13:57:36.389537    2631 log.go:172] (0xc00082b6b0) Data frame received for 5\nI0429 13:57:36.389577    2631 log.go:172] (0xc0007961e0) (5) Data frame handling\nI0429 13:57:36.389608    2631 log.go:172] (0xc0007961e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:30296/\nI0429 13:57:36.392143    2631 log.go:172] (0xc00082b6b0) Data frame received for 3\nI0429 13:57:36.392161    2631 log.go:172] (0xc000816280) (3) Data frame handling\nI0429 13:57:36.392179    2631 log.go:172] (0xc000816280) (3) Data frame sent\nI0429 13:57:36.392800    2631 log.go:172] (0xc00082b6b0) Data frame received for 3\nI0429 13:57:36.392824    2631 log.go:172] (0xc000816280) (3) Data frame handling\nI0429 13:57:36.392841    2631 log.go:172] (0xc00082b6b0) Data frame received for 5\nI0429 13:57:36.392852    2631 log.go:172] (0xc0007961e0) (5) Data frame handling\nI0429 13:57:36.394859    2631 log.go:172] (0xc00082b6b0) Data frame received for 1\nI0429 13:57:36.394891    2631 log.go:172] (0xc000a98280) (1) Data frame handling\nI0429 13:57:36.394947    2631 log.go:172] (0xc000a98280) (1) Data frame sent\nI0429 
13:57:36.395058    2631 log.go:172] (0xc00082b6b0) (0xc000a98280) Stream removed, broadcasting: 1\nI0429 13:57:36.395183    2631 log.go:172] (0xc00082b6b0) Go away received\nI0429 13:57:36.395677    2631 log.go:172] (0xc00082b6b0) (0xc000a98280) Stream removed, broadcasting: 1\nI0429 13:57:36.395702    2631 log.go:172] (0xc00082b6b0) (0xc000816280) Stream removed, broadcasting: 3\nI0429 13:57:36.395714    2631 log.go:172] (0xc00082b6b0) (0xc0007961e0) Stream removed, broadcasting: 5\n"
Apr 29 13:57:36.399: INFO: stdout: "affinity-nodeport-timeout-m49c8"
Apr 29 13:57:36.400: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7580, will wait for the garbage collector to delete the pods
Apr 29 13:57:37.297: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 258.180736ms
Apr 29 13:57:37.997: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 700.266273ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:57:55.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7580" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:66.910 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":148,"skipped":2292,"failed":0}
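The stdout captured above (sixteen identical `affinity-nodeport-timeout-sg7ql` responses, then, after a 15-second wait past the affinity timeout, a response from a different backend, `affinity-nodeport-timeout-m49c8`) is what the framework asserts on. A minimal sketch of that affinity check, assuming the same newline-separated response format shown in the log (this is a hypothetical re-implementation, not the framework's actual code):

```python
# Session affinity holds when every non-empty response from repeated curls
# against the NodePort names the same backend pod.
stdout = "\n" + "\n".join(["affinity-nodeport-timeout-sg7ql"] * 16)

# Drop the leading blank response, as seen in the "Received response from
# host:" lines above.
hosts = [h for h in stdout.split("\n") if h]

assert hosts, "no responses received"
assert len(set(hosts)) == 1, f"affinity broken, saw backends: {set(hosts)}"
```

After this check passes, the test sleeps past the configured session-affinity `timeoutSeconds` and curls once more; reaching a different backend pod confirms the affinity entry expired, which is the second half of the behavior this conformance test covers.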
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:57:55.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 13:57:55.483: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:57:55.489: INFO: Number of nodes with available pods: 0
Apr 29 13:57:55.489: INFO: Node kali-worker is running more than one daemon pod
Apr 29 13:57:56.528: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:57:56.532: INFO: Number of nodes with available pods: 0
Apr 29 13:57:56.532: INFO: Node kali-worker is running more than one daemon pod
Apr 29 13:57:57.494: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:57:57.498: INFO: Number of nodes with available pods: 0
Apr 29 13:57:57.498: INFO: Node kali-worker is running more than one daemon pod
Apr 29 13:57:58.494: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:57:58.497: INFO: Number of nodes with available pods: 0
Apr 29 13:57:58.497: INFO: Node kali-worker is running more than one daemon pod
Apr 29 13:57:59.559: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:57:59.563: INFO: Number of nodes with available pods: 0
Apr 29 13:57:59.563: INFO: Node kali-worker is running more than one daemon pod
Apr 29 13:58:00.528: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:00.550: INFO: Number of nodes with available pods: 2
Apr 29 13:58:00.550: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 29 13:58:00.665: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:00.697: INFO: Number of nodes with available pods: 1
Apr 29 13:58:00.697: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:01.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:01.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:01.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:02.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:02.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:02.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:03.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:03.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:03.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:04.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:04.705: INFO: Number of nodes with available pods: 1
Apr 29 13:58:04.705: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:05.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:05.708: INFO: Number of nodes with available pods: 1
Apr 29 13:58:05.708: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:06.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:06.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:06.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:07.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:07.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:07.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:08.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:08.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:08.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:09.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:09.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:09.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:10.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:10.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:10.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:11.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:11.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:11.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:12.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:12.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:12.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:13.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:13.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:13.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:14.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:14.706: INFO: Number of nodes with available pods: 1
Apr 29 13:58:14.706: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:15.960: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:15.964: INFO: Number of nodes with available pods: 1
Apr 29 13:58:15.964: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:16.738: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:16.743: INFO: Number of nodes with available pods: 1
Apr 29 13:58:16.743: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:17.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:17.707: INFO: Number of nodes with available pods: 1
Apr 29 13:58:17.707: INFO: Node kali-worker2 is running more than one daemon pod
Apr 29 13:58:18.703: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 13:58:18.707: INFO: Number of nodes with available pods: 2
Apr 29 13:58:18.707: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9627, will wait for the garbage collector to delete the pods
Apr 29 13:58:18.768: INFO: Deleting DaemonSet.extensions daemon-set took: 5.451941ms
Apr 29 13:58:19.068: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.289401ms
Apr 29 13:58:33.475: INFO: Number of nodes with available pods: 0
Apr 29 13:58:33.475: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 13:58:33.477: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9627/daemonsets","resourceVersion":"73477"},"items":null}

Apr 29 13:58:33.479: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9627/pods","resourceVersion":"73477"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:58:33.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9627" for this suite.

• [SLOW TEST:38.306 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":290,"completed":149,"skipped":2323,"failed":0}
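The polling loop above repeats "Number of nodes with available pods" until it matches the number of schedulable nodes, skipping the control-plane node because the simple DaemonSet does not tolerate its `node-role.kubernetes.io/master:NoSchedule` taint. A rough sketch of that readiness condition, under the assumption that node state is available as plain dictionaries (hypothetical helper, not the framework's code):

```python
# A simple DaemonSet is "ready" once every schedulable node (i.e. every node
# whose taints the DaemonSet pods tolerate) runs at least one available pod.
def daemonset_ready(nodes):
    # Nodes with the master NoSchedule taint are skipped, as in the log.
    schedulable = [n for n in nodes if not n.get("master_taint")]
    available = [n for n in schedulable if n.get("available_pods", 0) >= 1]
    return len(available) == len(schedulable)

nodes = [
    {"name": "kali-control-plane", "master_taint": True},
    {"name": "kali-worker", "available_pods": 1},
    {"name": "kali-worker2", "available_pods": 1},
]
assert daemonset_ready(nodes)
```

This mirrors the log's terminal state ("Number of running nodes: 2, number of available pods: 2"); the same condition is re-polled after one daemon pod is deleted, until the DaemonSet controller revives it.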
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:58:33.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
Apr 29 13:58:34.274: INFO: created pod pod-service-account-defaultsa
Apr 29 13:58:34.274: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 29 13:58:34.290: INFO: created pod pod-service-account-mountsa
Apr 29 13:58:34.290: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 29 13:58:34.387: INFO: created pod pod-service-account-nomountsa
Apr 29 13:58:34.387: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 29 13:58:34.447: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 29 13:58:34.447: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 29 13:58:34.512: INFO: created pod pod-service-account-mountsa-mountspec
Apr 29 13:58:34.512: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 29 13:58:34.573: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 29 13:58:34.574: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 29 13:58:34.661: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 29 13:58:34.661: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 29 13:58:34.697: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 29 13:58:34.697: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 29 13:58:34.736: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 29 13:58:34.736: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:58:34.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9466" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":290,"completed":150,"skipped":2334,"failed":0}
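The nine pods created above exercise the automount precedence rule: `pod.spec.automountServiceAccountToken`, when set, overrides the service account's `automountServiceAccountToken`; otherwise the service account's value applies, defaulting to true. A sketch of that rule, checked against all nine log lines (the function name is illustrative, not a Kubernetes API):

```python
# Effective token-mount decision for a pod: pod spec wins if set, then the
# service account's setting, then the default (mount the token).
def token_mounted(sa_automount, pod_automount):
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# Service accounts: defaultsa=unset, mountsa=True, nomountsa=False.
assert token_mounted(None, None) is True    # pod-service-account-defaultsa
assert token_mounted(True, None) is True    # pod-service-account-mountsa
assert token_mounted(False, None) is False  # pod-service-account-nomountsa
assert token_mounted(None, True) is True    # defaultsa-mountspec
assert token_mounted(True, True) is True    # mountsa-mountspec
assert token_mounted(False, True) is True   # nomountsa-mountspec
assert token_mounted(None, False) is False  # defaultsa-nomountspec
assert token_mounted(True, False) is False  # mountsa-nomountspec
assert token_mounted(False, False) is False # nomountsa-nomountspec
```

Each assertion corresponds to one "service account token volume mount: true/false" line in the log above.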
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:58:34.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 29 13:58:35.169: INFO: Waiting up to 5m0s for pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062" in namespace "downward-api-3407" to be "Succeeded or Failed"
Apr 29 13:58:35.302: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 132.849059ms
Apr 29 13:58:37.311: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141899564s
Apr 29 13:58:39.745: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575156361s
Apr 29 13:58:42.255: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 7.085506402s
Apr 29 13:58:44.799: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 9.629321386s
Apr 29 13:58:47.073: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 11.903945346s
Apr 29 13:58:49.367: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Pending", Reason="", readiness=false. Elapsed: 14.197735295s
Apr 29 13:58:51.450: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Running", Reason="", readiness=true. Elapsed: 16.280878796s
Apr 29 13:58:53.722: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Running", Reason="", readiness=true. Elapsed: 18.552115251s
Apr 29 13:58:55.726: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.556476809s
STEP: Saw pod success
Apr 29 13:58:55.726: INFO: Pod "downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062" satisfied condition "Succeeded or Failed"
Apr 29 13:58:55.755: INFO: Trying to get logs from node kali-worker2 pod downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062 container dapi-container: 
STEP: delete the pod
Apr 29 13:58:56.075: INFO: Waiting for pod downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062 to disappear
Apr 29 13:58:56.150: INFO: Pod downward-api-5c9b74ae-3a44-4e49-acff-5fd0bf074062 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:58:56.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3407" for this suite.

• [SLOW TEST:21.261 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":290,"completed":151,"skipped":2347,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:58:56.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 29 13:58:56.343: INFO: Waiting up to 5m0s for pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0" in namespace "emptydir-4551" to be "Succeeded or Failed"
Apr 29 13:58:56.397: INFO: Pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0": Phase="Pending", Reason="", readiness=false. Elapsed: 53.357248ms
Apr 29 13:58:58.678: INFO: Pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334889087s
Apr 29 13:59:00.683: INFO: Pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340223727s
Apr 29 13:59:02.688: INFO: Pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.34473078s
STEP: Saw pod success
Apr 29 13:59:02.688: INFO: Pod "pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0" satisfied condition "Succeeded or Failed"
Apr 29 13:59:02.691: INFO: Trying to get logs from node kali-worker pod pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0 container test-container: 
STEP: delete the pod
Apr 29 13:59:02.723: INFO: Waiting for pod pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0 to disappear
Apr 29 13:59:02.728: INFO: Pod pod-04c8a1c4-4f10-4b58-8f12-c0f71f40aee0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:02.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4551" for this suite.

• [SLOW TEST:6.479 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":152,"skipped":2381,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:02.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 13:59:06.884: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:06.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8667" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":290,"completed":153,"skipped":2397,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:06.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 13:59:07.618: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 13:59:09.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765547, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765547, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765547, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765547, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 13:59:12.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:12.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8257" for this suite.
STEP: Destroying namespace "webhook-8257-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.761 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":290,"completed":154,"skipped":2399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:13.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 13:59:14.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3" in namespace "projected-9682" to be "Succeeded or Failed"
Apr 29 13:59:14.208: INFO: Pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 168.919194ms
Apr 29 13:59:16.217: INFO: Pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178271389s
Apr 29 13:59:18.223: INFO: Pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183647666s
Apr 29 13:59:20.227: INFO: Pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188436277s
STEP: Saw pod success
Apr 29 13:59:20.227: INFO: Pod "downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3" satisfied condition "Succeeded or Failed"
Apr 29 13:59:20.230: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3 container client-container: 
STEP: delete the pod
Apr 29 13:59:20.268: INFO: Waiting for pod downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3 to disappear
Apr 29 13:59:20.273: INFO: Pod downwardapi-volume-31e67dab-29a3-4068-b503-d99d16a2a8b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:20.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9682" for this suite.

• [SLOW TEST:6.585 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":290,"completed":155,"skipped":2426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:20.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0429 13:59:21.421382       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 13:59:21.421: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:21.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2246" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":290,"completed":156,"skipped":2460,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:21.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 29 13:59:31.799: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 13:59:31.807: INFO: Pod pod-with-prestop-http-hook still exists
Apr 29 13:59:33.808: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 13:59:33.813: INFO: Pod pod-with-prestop-http-hook still exists
Apr 29 13:59:35.808: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 13:59:35.811: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:35.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8120" for this suite.

• [SLOW TEST:14.396 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":290,"completed":157,"skipped":2460,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:35.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-100d3b7e-05d9-4868-9d4b-dd1b81d79f08
STEP: Creating a pod to test consume configMaps
Apr 29 13:59:35.954: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d" in namespace "projected-3225" to be "Succeeded or Failed"
Apr 29 13:59:35.957: INFO: Pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422014ms
Apr 29 13:59:38.127: INFO: Pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173660332s
Apr 29 13:59:40.217: INFO: Pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d": Phase="Running", Reason="", readiness=true. Elapsed: 4.263873631s
Apr 29 13:59:42.221: INFO: Pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267768284s
STEP: Saw pod success
Apr 29 13:59:42.221: INFO: Pod "pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d" satisfied condition "Succeeded or Failed"
Apr 29 13:59:42.224: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 13:59:42.242: INFO: Waiting for pod pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d to disappear
Apr 29 13:59:42.258: INFO: Pod pod-projected-configmaps-821b25a1-d8e9-42b5-9d7e-5e43dc1cc17d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 13:59:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3225" for this suite.

• [SLOW TEST:6.442 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":290,"completed":158,"skipped":2463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 13:59:42.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 29 13:59:42.366: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74057 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 13:59:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 13:59:42.366: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74057 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 13:59:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 29 13:59:52.375: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74102 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 13:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 13:59:52.375: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74102 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 13:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 29 14:00:02.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74132 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:00:02.382: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74132 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 29 14:00:12.389: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74161 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:00:12.389: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-a 4bebe857-caea-4b40-9b84-0a1a361b2a62 74161 0 2020-04-29 13:59:42 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 29 14:00:22.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-b f042be17-17b6-4447-8930-65a97175cb77 74191 0 2020-04-29 14:00:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:00:22.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-b f042be17-17b6-4447-8930-65a97175cb77 74191 0 2020-04-29 14:00:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 29 14:00:32.409: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-b f042be17-17b6-4447-8930-65a97175cb77 74226 0 2020-04-29 14:00:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:00:32.410: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2520 /api/v1/namespaces/watch-2520/configmaps/e2e-watch-test-configmap-b f042be17-17b6-4447-8930-65a97175cb77 74226 0 2020-04-29 14:00:22 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-04-29 14:00:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:00:42.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2520" for this suite.

• [SLOW TEST:60.215 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":290,"completed":159,"skipped":2488,"failed":0}
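Each completed spec emits a machine-readable JSON progress record like the line above. A minimal sketch (the helper name `tally_progress` is hypothetical, not part of the e2e framework) for extracting the latest counters from a saved log:

```python
import json

def tally_progress(log_text):
    """Scan an e2e log for Ginkgo JSON progress lines of the form
    {"msg": ..., "total": ..., "completed": ..., "skipped": ..., "failed": ...}
    and return the last such record seen (i.e. the most recent counters)."""
    last = None
    for line in log_text.splitlines():
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue
        try:
            rec = json.loads(line)
        except ValueError:
            continue  # not a well-formed JSON line; skip
        if {"total", "completed", "skipped", "failed"} <= rec.keys():
            last = rec
    return last

sample = '{"msg":"PASSED example","total":290,"completed":159,"skipped":2488,"failed":0}'
print(tally_progress(sample))
```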
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:00:42.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0429 14:00:54.656254       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 14:00:54.656: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:00:54.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1788" for this suite.

• [SLOW TEST:12.191 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":290,"completed":160,"skipped":2499,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:00:54.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Apr 29 14:00:54.712: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 29 14:00:54.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:55.149: INFO: stderr: ""
Apr 29 14:00:55.149: INFO: stdout: "service/agnhost-slave created\n"
Apr 29 14:00:55.150: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 29 14:00:55.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:55.452: INFO: stderr: ""
Apr 29 14:00:55.452: INFO: stdout: "service/agnhost-master created\n"
Apr 29 14:00:55.452: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 29 14:00:55.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:56.617: INFO: stderr: ""
Apr 29 14:00:56.617: INFO: stdout: "service/frontend created\n"
Apr 29 14:00:56.617: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 29 14:00:56.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:56.951: INFO: stderr: ""
Apr 29 14:00:56.951: INFO: stdout: "deployment.apps/frontend created\n"
Apr 29 14:00:56.952: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 29 14:00:56.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:57.298: INFO: stderr: ""
Apr 29 14:00:57.298: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 29 14:00:57.298: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 29 14:00:57.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5089'
Apr 29 14:00:58.047: INFO: stderr: ""
Apr 29 14:00:58.047: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 29 14:00:58.047: INFO: Waiting for all frontend pods to be Running.
Apr 29 14:01:08.098: INFO: Waiting for frontend to serve content.
Apr 29 14:01:08.108: INFO: Trying to add a new entry to the guestbook.
Apr 29 14:01:08.118: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 29 14:01:08.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:08.686: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:08.686: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 14:01:08.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:09.132: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:09.132: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 14:01:09.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:09.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:09.303: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 14:01:09.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:09.551: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:09.551: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 14:01:09.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:09.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:09.863: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 14:01:09.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5089'
Apr 29 14:01:10.510: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:01:10.510: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:10.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5089" for this suite.

• [SLOW TEST:16.004 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":290,"completed":161,"skipped":2502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:10.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:01:11.259: INFO: Waiting up to 5m0s for pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59" in namespace "projected-5318" to be "Succeeded or Failed"
Apr 29 14:01:11.446: INFO: Pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59": Phase="Pending", Reason="", readiness=false. Elapsed: 186.704907ms
Apr 29 14:01:13.601: INFO: Pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342295268s
Apr 29 14:01:15.606: INFO: Pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346512899s
Apr 29 14:01:17.619: INFO: Pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.360411904s
STEP: Saw pod success
Apr 29 14:01:17.620: INFO: Pod "downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59" satisfied condition "Succeeded or Failed"
Apr 29 14:01:17.622: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59 container client-container: 
STEP: delete the pod
Apr 29 14:01:17.647: INFO: Waiting for pod downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59 to disappear
Apr 29 14:01:17.696: INFO: Pod downwardapi-volume-391e40d8-fc4e-4686-9a61-33daaa752d59 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:17.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5318" for this suite.

• [SLOW TEST:7.030 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":290,"completed":162,"skipped":2524,"failed":0}
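The repeated `Phase="Pending" ... Elapsed:` lines above are the framework polling the pod's phase until it reaches "Succeeded" or "Failed", or the 5m0s deadline passes. A minimal sketch of that poll-with-timeout pattern (hypothetical `get_phase` callback and injectable clock/sleep for testability; not the framework's actual API):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=300.0,
                   interval=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns one of `want`
    or `timeout` seconds elapse. Returns the terminal phase on success and
    raises TimeoutError otherwise."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

Injecting `clock` and `sleep` lets the loop be exercised instantly in tests, mirroring how the real framework bounds each wait with an explicit deadline.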
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:17.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 29 14:01:17.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74761 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:01:17.925: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74762 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:01:17.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74763 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 29 14:01:27.952: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74814 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:01:27.953: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74815 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:01:27.953: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8597 /api/v1/namespaces/watch-8597/configmaps/e2e-watch-test-label-changed 24e0cd1c-5d46-430d-b85b-b7f95deb1d9e 74816 0 2020-04-29 14:01:17 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-04-29 14:01:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:27.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8597" for this suite.

• [SLOW TEST:10.254 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":290,"completed":163,"skipped":2525,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:27.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:01:28.059: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 29 14:01:30.193: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:31.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7354" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":290,"completed":164,"skipped":2539,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:31.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5655
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5655
I0429 14:01:32.106135       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5655, replica count: 2
I0429 14:01:35.156587       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:01:38.156846       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 14:01:38.156: INFO: Creating new exec pod
Apr 29 14:01:43.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5655 execpodwrhbn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Apr 29 14:01:43.421: INFO: stderr: "I0429 14:01:43.322714    2900 log.go:172] (0xc000a718c0) (0xc00085f5e0) Create stream\nI0429 14:01:43.322784    2900 log.go:172] (0xc000a718c0) (0xc00085f5e0) Stream added, broadcasting: 1\nI0429 14:01:43.325694    2900 log.go:172] (0xc000a718c0) Reply frame received for 1\nI0429 14:01:43.325739    2900 log.go:172] (0xc000a718c0) (0xc000532500) Create stream\nI0429 14:01:43.325748    2900 log.go:172] (0xc000a718c0) (0xc000532500) Stream added, broadcasting: 3\nI0429 14:01:43.326635    2900 log.go:172] (0xc000a718c0) Reply frame received for 3\nI0429 14:01:43.326683    2900 log.go:172] (0xc000a718c0) (0xc000866000) Create stream\nI0429 14:01:43.326698    2900 log.go:172] (0xc000a718c0) (0xc000866000) Stream added, broadcasting: 5\nI0429 14:01:43.327518    2900 log.go:172] (0xc000a718c0) Reply frame received for 5\nI0429 14:01:43.409695    2900 log.go:172] (0xc000a718c0) Data frame received for 5\nI0429 14:01:43.409730    2900 log.go:172] (0xc000866000) (5) Data frame handling\nI0429 14:01:43.409755    2900 log.go:172] (0xc000866000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0429 14:01:43.411838    2900 log.go:172] (0xc000a718c0) Data frame received for 5\nI0429 14:01:43.411903    2900 log.go:172] (0xc000866000) (5) Data frame handling\nI0429 14:01:43.411945    2900 log.go:172] (0xc000866000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0429 14:01:43.412595    2900 log.go:172] (0xc000a718c0) Data frame received for 3\nI0429 14:01:43.412629    2900 log.go:172] (0xc000a718c0) Data frame received for 5\nI0429 14:01:43.412676    2900 log.go:172] (0xc000866000) (5) Data frame handling\nI0429 14:01:43.412717    2900 log.go:172] (0xc000532500) (3) Data frame handling\nI0429 14:01:43.414788    2900 log.go:172] (0xc000a718c0) Data frame received for 1\nI0429 14:01:43.414814    2900 log.go:172] (0xc00085f5e0) (1) Data frame handling\nI0429 14:01:43.414840    2900 log.go:172] (0xc00085f5e0) (1) Data frame sent\nI0429 14:01:43.414940    2900 log.go:172] (0xc000a718c0) (0xc00085f5e0) Stream removed, broadcasting: 1\nI0429 14:01:43.414979    2900 log.go:172] (0xc000a718c0) Go away received\nI0429 14:01:43.415326    2900 log.go:172] (0xc000a718c0) (0xc00085f5e0) Stream removed, broadcasting: 1\nI0429 14:01:43.415345    2900 log.go:172] (0xc000a718c0) (0xc000532500) Stream removed, broadcasting: 3\nI0429 14:01:43.415356    2900 log.go:172] (0xc000a718c0) (0xc000866000) Stream removed, broadcasting: 5\n"
Apr 29 14:01:43.421: INFO: stdout: ""
Apr 29 14:01:43.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5655 execpodwrhbn -- /bin/sh -x -c nc -zv -t -w 2 10.111.24.142 80'
Apr 29 14:01:43.612: INFO: stderr: "I0429 14:01:43.552814    2920 log.go:172] (0xc000ae31e0) (0xc000a7e460) Create stream\nI0429 14:01:43.552854    2920 log.go:172] (0xc000ae31e0) (0xc000a7e460) Stream added, broadcasting: 1\nI0429 14:01:43.556852    2920 log.go:172] (0xc000ae31e0) Reply frame received for 1\nI0429 14:01:43.556905    2920 log.go:172] (0xc000ae31e0) (0xc00067a460) Create stream\nI0429 14:01:43.556927    2920 log.go:172] (0xc000ae31e0) (0xc00067a460) Stream added, broadcasting: 3\nI0429 14:01:43.558161    2920 log.go:172] (0xc000ae31e0) Reply frame received for 3\nI0429 14:01:43.558196    2920 log.go:172] (0xc000ae31e0) (0xc0005f0140) Create stream\nI0429 14:01:43.558204    2920 log.go:172] (0xc000ae31e0) (0xc0005f0140) Stream added, broadcasting: 5\nI0429 14:01:43.559584    2920 log.go:172] (0xc000ae31e0) Reply frame received for 5\nI0429 14:01:43.606085    2920 log.go:172] (0xc000ae31e0) Data frame received for 5\nI0429 14:01:43.606130    2920 log.go:172] (0xc0005f0140) (5) Data frame handling\nI0429 14:01:43.606145    2920 log.go:172] (0xc0005f0140) (5) Data frame sent\nI0429 14:01:43.606155    2920 log.go:172] (0xc000ae31e0) Data frame received for 5\nI0429 14:01:43.606165    2920 log.go:172] (0xc0005f0140) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.24.142 80\nConnection to 10.111.24.142 80 port [tcp/http] succeeded!\nI0429 14:01:43.606196    2920 log.go:172] (0xc000ae31e0) Data frame received for 3\nI0429 14:01:43.606208    2920 log.go:172] (0xc00067a460) (3) Data frame handling\nI0429 14:01:43.607517    2920 log.go:172] (0xc000ae31e0) Data frame received for 1\nI0429 14:01:43.607549    2920 log.go:172] (0xc000a7e460) (1) Data frame handling\nI0429 14:01:43.607568    2920 log.go:172] (0xc000a7e460) (1) Data frame sent\nI0429 14:01:43.607588    2920 log.go:172] (0xc000ae31e0) (0xc000a7e460) Stream removed, broadcasting: 1\nI0429 14:01:43.607624    2920 log.go:172] (0xc000ae31e0) Go away received\nI0429 14:01:43.608036    2920 log.go:172] (0xc000ae31e0) (0xc000a7e460) Stream removed, broadcasting: 1\nI0429 14:01:43.608067    2920 log.go:172] (0xc000ae31e0) (0xc00067a460) Stream removed, broadcasting: 3\nI0429 14:01:43.608082    2920 log.go:172] (0xc000ae31e0) (0xc0005f0140) Stream removed, broadcasting: 5\n"
Apr 29 14:01:43.612: INFO: stdout: ""
Apr 29 14:01:43.612: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:43.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5655" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:12.401 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":290,"completed":165,"skipped":2540,"failed":0}
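The test above flips a Service from ExternalName to ClusterIP. A rough sketch of the two states of such a Service follows; all names, the external hostname, and port values are illustrative assumptions, not taken from the run above.

```yaml
# Before: the Service is a DNS alias (ExternalName), no cluster IP allocated.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service          # hypothetical name
  namespace: services-5655
spec:
  type: ExternalName
  externalName: clusterip-service.services-5655.svc.cluster.local
---
# After the type change: the same Service gets a ClusterIP and selects
# backend pods directly (selector and ports are assumptions).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-5655
spec:
  type: ClusterIP
  selector:
    app: clusterip-service            # hypothetical selector
  ports:
  - port: 80
    targetPort: 9376
```

The `nc -zv <clusterIP> 80` probes in the stderr output above are how the test verifies the converted Service actually accepts TCP connections on its new cluster IP.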
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:43.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:47.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4970" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":290,"completed":166,"skipped":2544,"failed":0}
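The image-defaults test above runs a container with no `command` or `args`, so the image's built-in ENTRYPOINT/CMD take effect. A minimal sketch, with a hypothetical pod name and an assumed image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # image choice is an assumption
    # command: and args: intentionally omitted ->
    # the kubelet runs the image's default ENTRYPOINT/CMD
```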
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:47.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:01:48.345: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:01:50.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765708, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765708, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765708, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765708, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:01:53.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:01:54.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9662" for this suite.
STEP: Destroying namespace "webhook-9662-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.209 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":290,"completed":167,"skipped":2557,"failed":0}
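The webhook test above relies on the API server exempting `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects from admission webhooks, so a misbehaving webhook cannot lock itself (or its cleanup) out. A hypothetical sketch of the kind of "deny everything" registration the test creates; the names, service reference, and path are assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions   # hypothetical name
webhooks:
- name: deny-webhook-configuration-deletions.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-9662                  # from the run above; path is assumed
      name: e2e-test-webhook
      path: /always-deny
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

Even with this registered, the test expects the dummy configurations to remain mutable and deletable, which is exactly what the STEP lines above confirm.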
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:01:55.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Apr 29 14:01:55.148: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:03.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9875" for this suite.

• [SLOW TEST:8.649 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":290,"completed":168,"skipped":2571,"failed":0}
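The init-container test above checks that when an init container fails on a pod with `restartPolicy: Never`, the pod goes to `Failed` and the app containers never start. A minimal sketch of such a pod; names and images are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail                     # hypothetical name
spec:
  restartPolicy: Never                # init failure is terminal; no retries
  initContainers:
  - name: init1
    image: busybox                    # image choice is an assumption
    command: ["/bin/false"]           # exits non-zero -> pod phase becomes Failed
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]        # never runs, because init1 failed
```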
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:03.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 29 14:02:04.080: INFO: Waiting up to 5m0s for pod "pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f" in namespace "emptydir-1971" to be "Succeeded or Failed"
Apr 29 14:02:04.084: INFO: Pod "pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.993198ms
Apr 29 14:02:06.141: INFO: Pod "pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060322291s
Apr 29 14:02:08.164: INFO: Pod "pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084004208s
STEP: Saw pod success
Apr 29 14:02:08.164: INFO: Pod "pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f" satisfied condition "Succeeded or Failed"
Apr 29 14:02:08.167: INFO: Trying to get logs from node kali-worker pod pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f container test-container: 
STEP: delete the pod
Apr 29 14:02:08.224: INFO: Waiting for pod pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f to disappear
Apr 29 14:02:08.229: INFO: Pod pod-4e8cc396-4e4c-4dc1-94de-04bcbba10e0f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1971" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":169,"skipped":2578,"failed":0}
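The emptyDir test above exercises a 0644 file on a default-medium (node disk) volume written by a non-root user. A rough equivalent pod spec; the UID, image, and commands are assumptions (the real test uses a dedicated mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644                 # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # non-root; exact UID is an assumption
  containers:
  - name: test-container
    image: busybox                    # image choice is an assumption
    command: ["sh", "-c",
      "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium: backed by node storage
```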
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:08.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-692084bf-8cea-4273-b753-4504afe0c384
STEP: Creating a pod to test consume configMaps
Apr 29 14:02:08.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364" in namespace "configmap-6716" to be "Succeeded or Failed"
Apr 29 14:02:08.590: INFO: Pod "pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364": Phase="Pending", Reason="", readiness=false. Elapsed: 52.98366ms
Apr 29 14:02:10.594: INFO: Pod "pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057529829s
Apr 29 14:02:12.599: INFO: Pod "pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062321701s
STEP: Saw pod success
Apr 29 14:02:12.599: INFO: Pod "pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364" satisfied condition "Succeeded or Failed"
Apr 29 14:02:12.602: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364 container configmap-volume-test: 
STEP: delete the pod
Apr 29 14:02:12.638: INFO: Waiting for pod pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364 to disappear
Apr 29 14:02:12.648: INFO: Pod pod-configmaps-4772cc4d-f38f-43a8-9fd1-96d2610d1364 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:12.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6716" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":290,"completed":170,"skipped":2604,"failed":0}
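The ConfigMap test above mounts the same ConfigMap at two paths in one pod. A minimal sketch of that shape; all names, keys, and the image are illustrative assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-cm                     # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                    # image choice is an assumption
    command: ["sh", "-c",
      "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/configmap-volume-1
    - name: cm-vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: shared-cm                 # same ConfigMap backs both volumes
  - name: cm-vol-2
    configMap:
      name: shared-cm
```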
SS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:12.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4405
STEP: creating service affinity-clusterip in namespace services-4405
STEP: creating replication controller affinity-clusterip in namespace services-4405
I0429 14:02:12.763031       7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4405, replica count: 3
I0429 14:02:15.813463       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:02:18.813728       7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 14:02:18.820: INFO: Creating new exec pod
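The `affinity-clusterip` Service created in the steps above can be sketched roughly as follows; aside from the name, namespace, and the `sessionAffinity: ClientIP` mode under test, the selector and port values are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
  namespace: services-4405
spec:
  type: ClusterIP
  sessionAffinity: ClientIP           # requests from one client IP stick to one backend
  selector:
    name: affinity-clusterip          # hypothetical; matches the RC's pod labels
  ports:
  - port: 80
    targetPort: 9376                  # port values are assumptions
```

The repeated `curl` loop in the exec pod below is how the test checks stickiness: with `ClientIP` affinity, every response should come from the same backend pod.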
Apr 29 14:02:23.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4405 execpod-affinityxkrqh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
Apr 29 14:02:24.069: INFO: stderr: "I0429 14:02:23.987804    2940 log.go:172] (0xc000980000) (0xc00044cc80) Create stream\nI0429 14:02:23.987892    2940 log.go:172] (0xc000980000) (0xc00044cc80) Stream added, broadcasting: 1\nI0429 14:02:23.990535    2940 log.go:172] (0xc000980000) Reply frame received for 1\nI0429 14:02:23.990572    2940 log.go:172] (0xc000980000) (0xc0001806e0) Create stream\nI0429 14:02:23.990581    2940 log.go:172] (0xc000980000) (0xc0001806e0) Stream added, broadcasting: 3\nI0429 14:02:23.991300    2940 log.go:172] (0xc000980000) Reply frame received for 3\nI0429 14:02:23.991342    2940 log.go:172] (0xc000980000) (0xc0005081e0) Create stream\nI0429 14:02:23.991357    2940 log.go:172] (0xc000980000) (0xc0005081e0) Stream added, broadcasting: 5\nI0429 14:02:23.992356    2940 log.go:172] (0xc000980000) Reply frame received for 5\nI0429 14:02:24.062421    2940 log.go:172] (0xc000980000) Data frame received for 5\nI0429 14:02:24.062477    2940 log.go:172] (0xc0005081e0) (5) Data frame handling\nI0429 14:02:24.062516    2940 log.go:172] (0xc0005081e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0429 14:02:24.063102    2940 log.go:172] (0xc000980000) Data frame received for 5\nI0429 14:02:24.063143    2940 log.go:172] (0xc0005081e0) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0429 14:02:24.063430    2940 log.go:172] (0xc0005081e0) (5) Data frame sent\nI0429 14:02:24.063505    2940 log.go:172] (0xc000980000) Data frame received for 5\nI0429 14:02:24.063525    2940 log.go:172] (0xc0005081e0) (5) Data frame handling\nI0429 14:02:24.063690    2940 log.go:172] (0xc000980000) Data frame received for 3\nI0429 14:02:24.063719    2940 log.go:172] (0xc0001806e0) (3) Data frame handling\nI0429 14:02:24.065460    2940 log.go:172] (0xc000980000) Data frame received for 1\nI0429 14:02:24.065490    2940 log.go:172] (0xc00044cc80) (1) Data frame handling\nI0429 14:02:24.065514    2940 log.go:172] 
(0xc00044cc80) (1) Data frame sent\nI0429 14:02:24.065531    2940 log.go:172] (0xc000980000) (0xc00044cc80) Stream removed, broadcasting: 1\nI0429 14:02:24.065628    2940 log.go:172] (0xc000980000) Go away received\nI0429 14:02:24.065848    2940 log.go:172] (0xc000980000) (0xc00044cc80) Stream removed, broadcasting: 1\nI0429 14:02:24.065872    2940 log.go:172] (0xc000980000) (0xc0001806e0) Stream removed, broadcasting: 3\nI0429 14:02:24.065890    2940 log.go:172] (0xc000980000) (0xc0005081e0) Stream removed, broadcasting: 5\n"
Apr 29 14:02:24.069: INFO: stdout: ""
Apr 29 14:02:24.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4405 execpod-affinityxkrqh -- /bin/sh -x -c nc -zv -t -w 2 10.105.220.90 80'
Apr 29 14:02:24.286: INFO: stderr: "I0429 14:02:24.206085    2962 log.go:172] (0xc000afb130) (0xc000842e60) Create stream\nI0429 14:02:24.206140    2962 log.go:172] (0xc000afb130) (0xc000842e60) Stream added, broadcasting: 1\nI0429 14:02:24.210261    2962 log.go:172] (0xc000afb130) Reply frame received for 1\nI0429 14:02:24.210300    2962 log.go:172] (0xc000afb130) (0xc00083d4a0) Create stream\nI0429 14:02:24.210312    2962 log.go:172] (0xc000afb130) (0xc00083d4a0) Stream added, broadcasting: 3\nI0429 14:02:24.211335    2962 log.go:172] (0xc000afb130) Reply frame received for 3\nI0429 14:02:24.211385    2962 log.go:172] (0xc000afb130) (0xc0007a81e0) Create stream\nI0429 14:02:24.211402    2962 log.go:172] (0xc000afb130) (0xc0007a81e0) Stream added, broadcasting: 5\nI0429 14:02:24.212401    2962 log.go:172] (0xc000afb130) Reply frame received for 5\nI0429 14:02:24.279798    2962 log.go:172] (0xc000afb130) Data frame received for 5\nI0429 14:02:24.279842    2962 log.go:172] (0xc000afb130) Data frame received for 3\nI0429 14:02:24.279875    2962 log.go:172] (0xc00083d4a0) (3) Data frame handling\nI0429 14:02:24.279902    2962 log.go:172] (0xc0007a81e0) (5) Data frame handling\nI0429 14:02:24.279919    2962 log.go:172] (0xc0007a81e0) (5) Data frame sent\nI0429 14:02:24.279934    2962 log.go:172] (0xc000afb130) Data frame received for 5\nI0429 14:02:24.279955    2962 log.go:172] (0xc0007a81e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.220.90 80\nConnection to 10.105.220.90 80 port [tcp/http] succeeded!\nI0429 14:02:24.281699    2962 log.go:172] (0xc000afb130) Data frame received for 1\nI0429 14:02:24.281734    2962 log.go:172] (0xc000842e60) (1) Data frame handling\nI0429 14:02:24.281752    2962 log.go:172] (0xc000842e60) (1) Data frame sent\nI0429 14:02:24.281775    2962 log.go:172] (0xc000afb130) (0xc000842e60) Stream removed, broadcasting: 1\nI0429 14:02:24.281860    2962 log.go:172] (0xc000afb130) Go away received\nI0429 14:02:24.282297    2962 log.go:172] 
(0xc000afb130) (0xc000842e60) Stream removed, broadcasting: 1\nI0429 14:02:24.282320    2962 log.go:172] (0xc000afb130) (0xc00083d4a0) Stream removed, broadcasting: 3\nI0429 14:02:24.282336    2962 log.go:172] (0xc000afb130) (0xc0007a81e0) Stream removed, broadcasting: 5\n"
Apr 29 14:02:24.286: INFO: stdout: ""
Apr 29 14:02:24.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4405 execpod-affinityxkrqh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.220.90:80/ ; done'
Apr 29 14:02:24.556: INFO: stderr: "I0429 14:02:24.410712    2984 log.go:172] (0xc000ba5340) (0xc000c081e0) Create stream\nI0429 14:02:24.410770    2984 log.go:172] (0xc000ba5340) (0xc000c081e0) Stream added, broadcasting: 1\nI0429 14:02:24.416468    2984 log.go:172] (0xc000ba5340) Reply frame received for 1\nI0429 14:02:24.416517    2984 log.go:172] (0xc000ba5340) (0xc0006ae500) Create stream\nI0429 14:02:24.416533    2984 log.go:172] (0xc000ba5340) (0xc0006ae500) Stream added, broadcasting: 3\nI0429 14:02:24.418234    2984 log.go:172] (0xc000ba5340) Reply frame received for 3\nI0429 14:02:24.418267    2984 log.go:172] (0xc000ba5340) (0xc000652460) Create stream\nI0429 14:02:24.418275    2984 log.go:172] (0xc000ba5340) (0xc000652460) Stream added, broadcasting: 5\nI0429 14:02:24.419900    2984 log.go:172] (0xc000ba5340) Reply frame received for 5\nI0429 14:02:24.469803    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.469841    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.469851    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.469866    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.469873    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.469881    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.472901    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.472926    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.472946    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.473278    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.473290    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.473296    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 
14:02:24.473304    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.473308    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.473313    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.481254    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.481267    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.481272    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.481819    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.481839    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.481856    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.481924    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.481948    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.481969    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.485271    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.485293    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.485319    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.485713    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.485746    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.485767    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.485785    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.485804    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.485824    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.489912    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.489949    2984 log.go:172] (0xc0006ae500) (3) Data frame 
handling\nI0429 14:02:24.489991    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.490569    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.490599    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.490614    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.490643    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.490671    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.490700    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.494374    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.494403    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.494435    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.494931    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.494964    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.494981    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.495003    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.495014    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.495038    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.500033    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.500052    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.500069    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.500571    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.500583    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.500588    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.105.220.90:80/\nI0429 14:02:24.500606    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.500620    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.500636    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.505500    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.505524    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.505545    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.506154    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.506180    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.506192    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.506208    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.506216    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.506232    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.510383    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.510415    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.510439    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.510828    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.510861    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.510879    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.510907    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.510930    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.510951    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.513979    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.514001    2984 log.go:172] 
(0xc000652460) (5) Data frame handling\nI0429 14:02:24.514014    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.514047    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.514066    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.514080    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.514085    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.514089    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.514112    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.518575    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.518606    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.518620    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.519012    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.519117    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.519146    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.519165    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.519178    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.519197    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.523474    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.523488    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.523495    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.524441    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.524475    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.524492    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.524520    2984 
log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.524553    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.524600    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.528645    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.528676    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.528705    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.529057    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.529068    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.529074    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.529096    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.529326    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.529355    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.533289    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.533300    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.533305    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.533904    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.533918    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.533928    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.533944    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.533968    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.533982    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.538249    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.538270    2984 
log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.538296    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.538593    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.538617    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.538627    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.538642    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.538656    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.538669    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.543209    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.543238    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.543259    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.543607    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.543630    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.543649    2984 log.go:172] (0xc000652460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.220.90:80/\nI0429 14:02:24.543668    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.543693    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.543714    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.547602    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.547625    2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.547646    2984 log.go:172] (0xc0006ae500) (3) Data frame sent\nI0429 14:02:24.548509    2984 log.go:172] (0xc000ba5340) Data frame received for 5\nI0429 14:02:24.548537    2984 log.go:172] (0xc000652460) (5) Data frame handling\nI0429 14:02:24.548564    2984 log.go:172] (0xc000ba5340) Data frame received for 3\nI0429 14:02:24.548589 
   2984 log.go:172] (0xc0006ae500) (3) Data frame handling\nI0429 14:02:24.550900    2984 log.go:172] (0xc000ba5340) Data frame received for 1\nI0429 14:02:24.550927    2984 log.go:172] (0xc000c081e0) (1) Data frame handling\nI0429 14:02:24.550941    2984 log.go:172] (0xc000c081e0) (1) Data frame sent\nI0429 14:02:24.551011    2984 log.go:172] (0xc000ba5340) (0xc000c081e0) Stream removed, broadcasting: 1\nI0429 14:02:24.551139    2984 log.go:172] (0xc000ba5340) Go away received\nI0429 14:02:24.551404    2984 log.go:172] (0xc000ba5340) (0xc000c081e0) Stream removed, broadcasting: 1\nI0429 14:02:24.551426    2984 log.go:172] (0xc000ba5340) (0xc0006ae500) Stream removed, broadcasting: 3\nI0429 14:02:24.551434    2984 log.go:172] (0xc000ba5340) (0xc000652460) Stream removed, broadcasting: 5\n"
Apr 29 14:02:24.556: INFO: stdout: "\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs\naffinity-clusterip-g94hs"
Apr 29 14:02:24.556: INFO: Received response from host: 
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Received response from host: affinity-clusterip-g94hs
Apr 29 14:02:24.556: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-4405, will wait for the garbage collector to delete the pods
Apr 29 14:02:24.676: INFO: Deleting ReplicationController affinity-clusterip took: 27.207238ms
Apr 29 14:02:25.076: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.272449ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:33.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4405" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:21.183 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":290,"completed":171,"skipped":2606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:33.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 29 14:02:33.897: INFO: Waiting up to 5m0s for pod "pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f" in namespace "emptydir-477" to be "Succeeded or Failed"
Apr 29 14:02:33.943: INFO: Pod "pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.202552ms
Apr 29 14:02:35.948: INFO: Pod "pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050811414s
Apr 29 14:02:37.951: INFO: Pod "pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054131148s
STEP: Saw pod success
Apr 29 14:02:37.951: INFO: Pod "pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f" satisfied condition "Succeeded or Failed"
Apr 29 14:02:37.953: INFO: Trying to get logs from node kali-worker pod pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f container test-container: 
STEP: delete the pod
Apr 29 14:02:38.014: INFO: Waiting for pod pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f to disappear
Apr 29 14:02:38.032: INFO: Pod pod-c3f8dddf-5b3d-4f51-af1f-8a197d73bc5f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:38.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-477" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":172,"skipped":2637,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:38.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-7a7f161c-fedd-444e-80ee-a6ce997f0666
STEP: Creating a pod to test consume configMaps
Apr 29 14:02:38.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d" in namespace "configmap-4078" to be "Succeeded or Failed"
Apr 29 14:02:38.182: INFO: Pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.479621ms
Apr 29 14:02:40.186: INFO: Pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021136064s
Apr 29 14:02:42.225: INFO: Pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059916997s
Apr 29 14:02:44.232: INFO: Pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066331805s
STEP: Saw pod success
Apr 29 14:02:44.232: INFO: Pod "pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d" satisfied condition "Succeeded or Failed"
Apr 29 14:02:44.236: INFO: Trying to get logs from node kali-worker pod pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d container configmap-volume-test: 
STEP: delete the pod
Apr 29 14:02:44.249: INFO: Waiting for pod pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d to disappear
Apr 29 14:02:44.254: INFO: Pod pod-configmaps-eed0234e-7790-4ec1-a2aa-8303cf703f6d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4078" for this suite.

• [SLOW TEST:6.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":173,"skipped":2680,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:44.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 29 14:02:44.348: INFO: Waiting up to 5m0s for pod "downward-api-a961dd76-3517-4139-904b-42885f9caea4" in namespace "downward-api-1149" to be "Succeeded or Failed"
Apr 29 14:02:44.351: INFO: Pod "downward-api-a961dd76-3517-4139-904b-42885f9caea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.669997ms
Apr 29 14:02:46.355: INFO: Pod "downward-api-a961dd76-3517-4139-904b-42885f9caea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006740385s
Apr 29 14:02:48.358: INFO: Pod "downward-api-a961dd76-3517-4139-904b-42885f9caea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010553652s
STEP: Saw pod success
Apr 29 14:02:48.358: INFO: Pod "downward-api-a961dd76-3517-4139-904b-42885f9caea4" satisfied condition "Succeeded or Failed"
Apr 29 14:02:48.361: INFO: Trying to get logs from node kali-worker2 pod downward-api-a961dd76-3517-4139-904b-42885f9caea4 container dapi-container: 
STEP: delete the pod
Apr 29 14:02:48.440: INFO: Waiting for pod downward-api-a961dd76-3517-4139-904b-42885f9caea4 to disappear
Apr 29 14:02:48.446: INFO: Pod downward-api-a961dd76-3517-4139-904b-42885f9caea4 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:02:48.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1149" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":290,"completed":174,"skipped":2681,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:02:48.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:02:49.027: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:02:52.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765768, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:02:54.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765769, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765768, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:02:57.423: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 29 14:03:01.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-9071 to-be-attached-pod -i -c=container1'
Apr 29 14:03:01.932: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:03:01.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9071" for this suite.
STEP: Destroying namespace "webhook-9071-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.616 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":290,"completed":175,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:03:02.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:03:02.172: INFO: Creating deployment "test-recreate-deployment"
Apr 29 14:03:02.177: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 29 14:03:02.199: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 29 14:03:05.061: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 29 14:03:05.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765782, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765782, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765782, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723765782, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:03:07.331: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 29 14:03:07.448: INFO: Updating deployment test-recreate-deployment
Apr 29 14:03:07.448: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Apr 29 14:03:08.082: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9650 /apis/apps/v1/namespaces/deployment-9650/deployments/test-recreate-deployment 39e86cad-cb32-478d-9a41-2cd76f68d0f3 75719 2 2020-04-29 14:03:02 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032460c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-29 14:03:07 +0000 UTC,LastTransitionTime:2020-04-29 14:03:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-04-29 14:03:07 +0000 UTC,LastTransitionTime:2020-04-29 14:03:02 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Apr 29 14:03:08.087: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-9650 /apis/apps/v1/namespaces/deployment-9650/replicasets/test-recreate-deployment-d5667d9c7 2b073912-93c6-4750-a797-4294fba682c4 75718 1 2020-04-29 14:03:07 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 39e86cad-cb32-478d-9a41-2cd76f68d0f3 0xc0032467c0 0xc0032467c1}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39e86cad-cb32-478d-9a41-2cd76f68d0f3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003246838  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:03:08.087: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Apr 29 14:03:08.087: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8  deployment-9650 /apis/apps/v1/namespaces/deployment-9650/replicasets/test-recreate-deployment-6d65b9f6d8 e82c8fc6-0811-4a83-8256-9593c0f282fe 75709 2 2020-04-29 14:03:02 +0000 UTC   map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 39e86cad-cb32-478d-9a41-2cd76f68d0f3 0xc0032466c7 0xc0032466c8}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39e86cad-cb32-478d-9a41-2cd76f68d0f3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] []  []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003246758  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:03:08.118: INFO: Pod "test-recreate-deployment-d5667d9c7-j9phl" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-j9phl test-recreate-deployment-d5667d9c7- deployment-9650 /api/v1/namespaces/deployment-9650/pods/test-recreate-deployment-d5667d9c7-j9phl c8b9b8c6-f0ef-4295-bcbe-ac6145e34dd4 75721 0 2020-04-29 14:03:07 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 2b073912-93c6-4750-a797-4294fba682c4 0xc003246ee0 0xc003246ee1}] []  [{kube-controller-manager Update v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b073912-93c6-4750-a797-4294fba682c4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:03:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xk52n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xk52n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xk52n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:03:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:03:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:03:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:03:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:03:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:03:08.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9650" for this suite.

• [SLOW TEST:6.066 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":290,"completed":176,"skipped":2704,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:03:08.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-7071/configmap-test-30620b71-d939-46fc-a151-accca63a28f9
STEP: Creating a pod to test consume configMaps
Apr 29 14:03:08.233: INFO: Waiting up to 5m0s for pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482" in namespace "configmap-7071" to be "Succeeded or Failed"
Apr 29 14:03:08.286: INFO: Pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482": Phase="Pending", Reason="", readiness=false. Elapsed: 52.618224ms
Apr 29 14:03:10.316: INFO: Pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082412879s
Apr 29 14:03:12.424: INFO: Pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190868546s
Apr 29 14:03:14.427: INFO: Pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.193947027s
STEP: Saw pod success
Apr 29 14:03:14.427: INFO: Pod "pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482" satisfied condition "Succeeded or Failed"
Apr 29 14:03:14.430: INFO: Trying to get logs from node kali-worker pod pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482 container env-test: 
STEP: delete the pod
Apr 29 14:03:14.481: INFO: Waiting for pod pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482 to disappear
Apr 29 14:03:14.495: INFO: Pod pod-configmaps-103223fa-b12f-40bb-98ba-a3cb81fb3482 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:03:14.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7071" for this suite.

• [SLOW TEST:6.370 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":290,"completed":177,"skipped":2715,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:03:14.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-ed931872-6c2c-4973-911b-e75b3ba941af
STEP: Creating a pod to test consume secrets
Apr 29 14:03:14.796: INFO: Waiting up to 5m0s for pod "pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa" in namespace "secrets-3326" to be "Succeeded or Failed"
Apr 29 14:03:14.843: INFO: Pod "pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa": Phase="Pending", Reason="", readiness=false. Elapsed: 46.469082ms
Apr 29 14:03:16.854: INFO: Pod "pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057692008s
Apr 29 14:03:18.858: INFO: Pod "pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062059394s
STEP: Saw pod success
Apr 29 14:03:18.858: INFO: Pod "pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa" satisfied condition "Succeeded or Failed"
Apr 29 14:03:18.862: INFO: Trying to get logs from node kali-worker pod pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa container secret-env-test: 
STEP: delete the pod
Apr 29 14:03:18.902: INFO: Waiting for pod pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa to disappear
Apr 29 14:03:18.937: INFO: Pod pod-secrets-406ef359-74c5-44d9-88cb-1e13662256fa no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:03:18.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3326" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":290,"completed":178,"skipped":2732,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:03:18.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-xjv5
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 14:03:19.090: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xjv5" in namespace "subpath-628" to be "Succeeded or Failed"
Apr 29 14:03:19.094: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657871ms
Apr 29 14:03:21.099: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008739683s
Apr 29 14:03:23.103: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 4.012860054s
Apr 29 14:03:25.107: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 6.017483909s
Apr 29 14:03:27.112: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 8.021866739s
Apr 29 14:03:29.117: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 10.02665216s
Apr 29 14:03:31.121: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 12.031167403s
Apr 29 14:03:33.125: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 14.035481165s
Apr 29 14:03:35.130: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 16.040249346s
Apr 29 14:03:37.134: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 18.044320696s
Apr 29 14:03:39.171: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 20.081339211s
Apr 29 14:03:41.189: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Running", Reason="", readiness=true. Elapsed: 22.099230644s
Apr 29 14:03:43.193: INFO: Pod "pod-subpath-test-secret-xjv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.103432124s
STEP: Saw pod success
Apr 29 14:03:43.193: INFO: Pod "pod-subpath-test-secret-xjv5" satisfied condition "Succeeded or Failed"
Apr 29 14:03:43.197: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-xjv5 container test-container-subpath-secret-xjv5: 
STEP: delete the pod
Apr 29 14:03:43.258: INFO: Waiting for pod pod-subpath-test-secret-xjv5 to disappear
Apr 29 14:03:43.275: INFO: Pod pod-subpath-test-secret-xjv5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-xjv5
Apr 29 14:03:43.275: INFO: Deleting pod "pod-subpath-test-secret-xjv5" in namespace "subpath-628"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:03:43.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-628" for this suite.

• [SLOW TEST:24.354 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":290,"completed":179,"skipped":2750,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:03:43.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4363
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 14:03:43.393: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 29 14:03:43.525: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 14:03:45.620: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 29 14:03:47.999: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:49.529: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:51.530: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:53.528: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:55.529: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:57.529: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 29 14:03:59.528: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 29 14:03:59.533: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 14:04:01.584: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 29 14:04:03.537: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 29 14:04:07.755: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.136 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:04:07.755: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:04:07.781084       7 log.go:172] (0xc004c24370) (0xc001eca140) Create stream
I0429 14:04:07.781291       7 log.go:172] (0xc004c24370) (0xc001eca140) Stream added, broadcasting: 1
I0429 14:04:07.782729       7 log.go:172] (0xc004c24370) Reply frame received for 1
I0429 14:04:07.782777       7 log.go:172] (0xc004c24370) (0xc001eca1e0) Create stream
I0429 14:04:07.782794       7 log.go:172] (0xc004c24370) (0xc001eca1e0) Stream added, broadcasting: 3
I0429 14:04:07.783562       7 log.go:172] (0xc004c24370) Reply frame received for 3
I0429 14:04:07.783594       7 log.go:172] (0xc004c24370) (0xc002ada000) Create stream
I0429 14:04:07.783605       7 log.go:172] (0xc004c24370) (0xc002ada000) Stream added, broadcasting: 5
I0429 14:04:07.784423       7 log.go:172] (0xc004c24370) Reply frame received for 5
I0429 14:04:08.818329       7 log.go:172] (0xc004c24370) Data frame received for 5
I0429 14:04:08.818371       7 log.go:172] (0xc002ada000) (5) Data frame handling
I0429 14:04:08.818402       7 log.go:172] (0xc004c24370) Data frame received for 3
I0429 14:04:08.818448       7 log.go:172] (0xc001eca1e0) (3) Data frame handling
I0429 14:04:08.818475       7 log.go:172] (0xc001eca1e0) (3) Data frame sent
I0429 14:04:08.818490       7 log.go:172] (0xc004c24370) Data frame received for 3
I0429 14:04:08.818500       7 log.go:172] (0xc001eca1e0) (3) Data frame handling
I0429 14:04:08.820062       7 log.go:172] (0xc004c24370) Data frame received for 1
I0429 14:04:08.820086       7 log.go:172] (0xc001eca140) (1) Data frame handling
I0429 14:04:08.820100       7 log.go:172] (0xc001eca140) (1) Data frame sent
I0429 14:04:08.820111       7 log.go:172] (0xc004c24370) (0xc001eca140) Stream removed, broadcasting: 1
I0429 14:04:08.820160       7 log.go:172] (0xc004c24370) Go away received
I0429 14:04:08.820203       7 log.go:172] (0xc004c24370) (0xc001eca140) Stream removed, broadcasting: 1
I0429 14:04:08.820228       7 log.go:172] (0xc004c24370) (0xc001eca1e0) Stream removed, broadcasting: 3
I0429 14:04:08.820258       7 log.go:172] (0xc004c24370) (0xc002ada000) Stream removed, broadcasting: 5
Apr 29 14:04:08.820: INFO: Found all expected endpoints: [netserver-0]
Apr 29 14:04:08.823: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.138 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:04:08.823: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:04:08.848555       7 log.go:172] (0xc0019b8420) (0xc002ada460) Create stream
I0429 14:04:08.848596       7 log.go:172] (0xc0019b8420) (0xc002ada460) Stream added, broadcasting: 1
I0429 14:04:08.850675       7 log.go:172] (0xc0019b8420) Reply frame received for 1
I0429 14:04:08.850737       7 log.go:172] (0xc0019b8420) (0xc00296c000) Create stream
I0429 14:04:08.850759       7 log.go:172] (0xc0019b8420) (0xc00296c000) Stream added, broadcasting: 3
I0429 14:04:08.851790       7 log.go:172] (0xc0019b8420) Reply frame received for 3
I0429 14:04:08.851843       7 log.go:172] (0xc0019b8420) (0xc0017fb400) Create stream
I0429 14:04:08.851866       7 log.go:172] (0xc0019b8420) (0xc0017fb400) Stream added, broadcasting: 5
I0429 14:04:08.853020       7 log.go:172] (0xc0019b8420) Reply frame received for 5
I0429 14:04:09.908832       7 log.go:172] (0xc0019b8420) Data frame received for 3
I0429 14:04:09.908900       7 log.go:172] (0xc00296c000) (3) Data frame handling
I0429 14:04:09.908919       7 log.go:172] (0xc00296c000) (3) Data frame sent
I0429 14:04:09.908931       7 log.go:172] (0xc0019b8420) Data frame received for 3
I0429 14:04:09.908945       7 log.go:172] (0xc00296c000) (3) Data frame handling
I0429 14:04:09.910186       7 log.go:172] (0xc0019b8420) Data frame received for 5
I0429 14:04:09.910224       7 log.go:172] (0xc0017fb400) (5) Data frame handling
I0429 14:04:09.911438       7 log.go:172] (0xc0019b8420) Data frame received for 1
I0429 14:04:09.911473       7 log.go:172] (0xc002ada460) (1) Data frame handling
I0429 14:04:09.911524       7 log.go:172] (0xc002ada460) (1) Data frame sent
I0429 14:04:09.911568       7 log.go:172] (0xc0019b8420) (0xc002ada460) Stream removed, broadcasting: 1
I0429 14:04:09.911623       7 log.go:172] (0xc0019b8420) Go away received
I0429 14:04:09.911791       7 log.go:172] (0xc0019b8420) (0xc002ada460) Stream removed, broadcasting: 1
I0429 14:04:09.911822       7 log.go:172] (0xc0019b8420) (0xc00296c000) Stream removed, broadcasting: 3
I0429 14:04:09.911853       7 log.go:172] (0xc0019b8420) (0xc0017fb400) Stream removed, broadcasting: 5
Apr 29 14:04:09.911: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:04:09.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4363" for this suite.

• [SLOW TEST:27.202 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":180,"skipped":2757,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:04:10.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4032
STEP: creating service affinity-nodeport-transition in namespace services-4032
STEP: creating replication controller affinity-nodeport-transition in namespace services-4032
I0429 14:04:11.619737       7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4032, replica count: 3
I0429 14:04:14.670249       7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:04:17.670452       7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:04:20.670801       7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 14:04:20.681: INFO: Creating new exec pod
Apr 29 14:04:25.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Apr 29 14:04:25.934: INFO: stderr: "I0429 14:04:25.857986    3027 log.go:172] (0xc000b26000) (0xc00057e320) Create stream\nI0429 14:04:25.858040    3027 log.go:172] (0xc000b26000) (0xc00057e320) Stream added, broadcasting: 1\nI0429 14:04:25.859615    3027 log.go:172] (0xc000b26000) Reply frame received for 1\nI0429 14:04:25.859644    3027 log.go:172] (0xc000b26000) (0xc00057f2c0) Create stream\nI0429 14:04:25.859651    3027 log.go:172] (0xc000b26000) (0xc00057f2c0) Stream added, broadcasting: 3\nI0429 14:04:25.860472    3027 log.go:172] (0xc000b26000) Reply frame received for 3\nI0429 14:04:25.860507    3027 log.go:172] (0xc000b26000) (0xc0004aee60) Create stream\nI0429 14:04:25.860526    3027 log.go:172] (0xc000b26000) (0xc0004aee60) Stream added, broadcasting: 5\nI0429 14:04:25.863383    3027 log.go:172] (0xc000b26000) Reply frame received for 5\nI0429 14:04:25.926661    3027 log.go:172] (0xc000b26000) Data frame received for 3\nI0429 14:04:25.926728    3027 log.go:172] (0xc00057f2c0) (3) Data frame handling\nI0429 14:04:25.926765    3027 log.go:172] (0xc000b26000) Data frame received for 5\nI0429 14:04:25.926785    3027 log.go:172] (0xc0004aee60) (5) Data frame handling\nI0429 14:04:25.926814    3027 log.go:172] (0xc0004aee60) (5) Data frame sent\nI0429 14:04:25.926837    3027 log.go:172] (0xc000b26000) Data frame received for 5\nI0429 14:04:25.926852    3027 log.go:172] (0xc0004aee60) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0429 14:04:25.928515    3027 log.go:172] (0xc000b26000) Data frame received for 1\nI0429 14:04:25.928550    3027 log.go:172] (0xc00057e320) (1) Data frame handling\nI0429 14:04:25.928568    3027 log.go:172] (0xc00057e320) (1) Data frame sent\nI0429 14:04:25.928584    3027 log.go:172] (0xc000b26000) (0xc00057e320) Stream removed, broadcasting: 1\nI0429 14:04:25.928606    3027 log.go:172] (0xc000b26000) Go away received\nI0429 
14:04:25.929069    3027 log.go:172] (0xc000b26000) (0xc00057e320) Stream removed, broadcasting: 1\nI0429 14:04:25.929101    3027 log.go:172] (0xc000b26000) (0xc00057f2c0) Stream removed, broadcasting: 3\nI0429 14:04:25.929334    3027 log.go:172] (0xc000b26000) (0xc0004aee60) Stream removed, broadcasting: 5\n"
Apr 29 14:04:25.935: INFO: stdout: ""
Apr 29 14:04:25.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c nc -zv -t -w 2 10.104.174.12 80'
Apr 29 14:04:26.157: INFO: stderr: "I0429 14:04:26.085611    3047 log.go:172] (0xc000b75130) (0xc000b56460) Create stream\nI0429 14:04:26.085696    3047 log.go:172] (0xc000b75130) (0xc000b56460) Stream added, broadcasting: 1\nI0429 14:04:26.089966    3047 log.go:172] (0xc000b75130) Reply frame received for 1\nI0429 14:04:26.090016    3047 log.go:172] (0xc000b75130) (0xc0005c8280) Create stream\nI0429 14:04:26.090045    3047 log.go:172] (0xc000b75130) (0xc0005c8280) Stream added, broadcasting: 3\nI0429 14:04:26.090845    3047 log.go:172] (0xc000b75130) Reply frame received for 3\nI0429 14:04:26.090890    3047 log.go:172] (0xc000b75130) (0xc0005441e0) Create stream\nI0429 14:04:26.090907    3047 log.go:172] (0xc000b75130) (0xc0005441e0) Stream added, broadcasting: 5\nI0429 14:04:26.091823    3047 log.go:172] (0xc000b75130) Reply frame received for 5\nI0429 14:04:26.149346    3047 log.go:172] (0xc000b75130) Data frame received for 5\nI0429 14:04:26.149379    3047 log.go:172] (0xc0005441e0) (5) Data frame handling\nI0429 14:04:26.149394    3047 log.go:172] (0xc0005441e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.174.12 80\nI0429 14:04:26.149792    3047 log.go:172] (0xc000b75130) Data frame received for 5\nI0429 14:04:26.149821    3047 log.go:172] (0xc0005441e0) (5) Data frame handling\nI0429 14:04:26.149841    3047 log.go:172] (0xc0005441e0) (5) Data frame sent\nConnection to 10.104.174.12 80 port [tcp/http] succeeded!\nI0429 14:04:26.150371    3047 log.go:172] (0xc000b75130) Data frame received for 5\nI0429 14:04:26.150406    3047 log.go:172] (0xc0005441e0) (5) Data frame handling\nI0429 14:04:26.150563    3047 log.go:172] (0xc000b75130) Data frame received for 3\nI0429 14:04:26.150583    3047 log.go:172] (0xc0005c8280) (3) Data frame handling\nI0429 14:04:26.152038    3047 log.go:172] (0xc000b75130) Data frame received for 1\nI0429 14:04:26.152063    3047 log.go:172] (0xc000b56460) (1) Data frame handling\nI0429 14:04:26.152076    3047 log.go:172] (0xc000b56460) 
(1) Data frame sent\nI0429 14:04:26.152090    3047 log.go:172] (0xc000b75130) (0xc000b56460) Stream removed, broadcasting: 1\nI0429 14:04:26.152119    3047 log.go:172] (0xc000b75130) Go away received\nI0429 14:04:26.152691    3047 log.go:172] (0xc000b75130) (0xc000b56460) Stream removed, broadcasting: 1\nI0429 14:04:26.152720    3047 log.go:172] (0xc000b75130) (0xc0005c8280) Stream removed, broadcasting: 3\nI0429 14:04:26.152734    3047 log.go:172] (0xc000b75130) (0xc0005441e0) Stream removed, broadcasting: 5\n"
Apr 29 14:04:26.157: INFO: stdout: ""
Apr 29 14:04:26.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32478'
Apr 29 14:04:26.381: INFO: stderr: "+ nc -zv -t -w 2 172.17.0.15 32478\nConnection to 172.17.0.15 32478 port [tcp/32478] succeeded!\n"
Apr 29 14:04:26.381: INFO: stdout: ""
Apr 29 14:04:26.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32478'
Apr 29 14:04:26.597: INFO: stderr: "+ nc -zv -t -w 2 172.17.0.18 32478\nConnection to 172.17.0.18 32478 port [tcp/32478] succeeded!\n"
Apr 29 14:04:26.597: INFO: stdout: ""
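The two exec calls above are the framework probing the service's NodePort on each node IP with `nc -zv` before testing affinity. A minimal sketch of that probe as a helper, assuming illustrative names (`build_nodeport_probe` and its parameters are hypothetical, not part of the e2e framework):

```shell
# Hypothetical helper mirroring the probe the test runs: it composes the
# `kubectl exec ... nc -zv` command used to verify a NodePort is reachable
# from inside the exec pod. All names here are taken from the log for
# illustration; the function itself is an assumption, not framework code.
build_nodeport_probe() {
  ns="$1"; pod="$2"; host="$3"; port="$4"
  printf '%s' "kubectl --namespace=${ns} exec ${pod} -- /bin/sh -x -c 'nc -zv -t -w 2 ${host} ${port}'"
}

# Reproduces the shape of the second probe above (node 172.17.0.18).
cmd=$(build_nodeport_probe services-4032 execpod-affinityqvwts 172.17.0.18 32478)
echo "$cmd"
```

`nc -zv -t -w 2` does a zero-I/O TCP connect with a 2-second timeout, so "Connection ... succeeded!" on stderr is the only signal the test needs.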
Apr 29 14:04:26.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.15:32478/ ; done'
Apr 29 14:04:26.918: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n"
Apr 29 14:04:26.918: INFO: stdout: "\naffinity-nodeport-transition-glqnd\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-glqnd\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-22s6g\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-22s6g\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-22s6g\naffinity-nodeport-transition-22s6g\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-glqnd\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-22s6g\naffinity-nodeport-transition-glqnd"
Apr 29 14:04:26.918: INFO: Received response from host: 
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-glqnd
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-glqnd
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-22s6g
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-22s6g
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-22s6g
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-22s6g
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-glqnd
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-22s6g
Apr 29 14:04:26.918: INFO: Received response from host: affinity-nodeport-transition-glqnd
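The 16 responses above come from three different backend pods, which is expected here because this spec exercises a sessionAffinity *transition* (affinity not yet enforced). Conceptually, the check boils down to tallying distinct responders: several hosts while affinity is off, exactly one once ClientIP affinity is on. A minimal sketch of that tally, using the response list copied from the log (the script itself is an illustration, not the framework's Go implementation):

```shell
# Count how many distinct backend pods answered the 16 curl requests.
# The response list is copied verbatim from the log lines above.
responses='affinity-nodeport-transition-glqnd
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-glqnd
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-22s6g
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-22s6g
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-22s6g
affinity-nodeport-transition-22s6g
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-glqnd
affinity-nodeport-transition-w2sxf
affinity-nodeport-transition-22s6g
affinity-nodeport-transition-glqnd'

# sort -u deduplicates; wc -l counts the unique pod names.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "distinct backends: $distinct"
```

For this run the tally is 3 (glqnd, w2sxf, 22s6g), consistent with affinity being in transition rather than enforced.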
Apr 29 14:04:26.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4032 execpod-affinityqvwts -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.15:32478/ ; done'
Apr 29 14:04:27.235: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.15:32478/\n"
Apr 29 14:04:27.235: INFO: stdout: "\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf\naffinity-nodeport-transition-w2sxf"
Apr 29 14:04:27.236: INFO: Received response from host: 
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Received response from host: affinity-nodeport-transition-w2sxf
Apr 29 14:04:27.236: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4032, will wait for the garbage collector to delete the pods
Apr 29 14:04:27.360: INFO: Deleting ReplicationController affinity-nodeport-transition took: 21.883892ms
Apr 29 14:04:27.960: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.220767ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:04:43.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4032" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:33.031 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":290,"completed":181,"skipped":2789,"failed":0}
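The test above exercises switching session affinity on a NodePort Service, which corresponds to toggling `Service.spec.sessionAffinity` between `None` and `ClientIP`. A minimal sketch of that operation (assumes a running cluster; the Service name is taken from this run, but the patch commands are illustrative, not from the log):

```shell
# Enable ClientIP affinity: repeated requests from one client to
# <nodeIP>:<nodePort> should then land on the same backend pod, matching
# the run of identical "affinity-nodeport-transition-w2sxf" responses above.
kubectl patch service affinity-nodeport-transition \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'

# Switch back to the default round-robin-style behavior.
kubectl patch service affinity-nodeport-transition \
  -p '{"spec":{"sessionAffinity":"None"}}'
```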
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:04:43.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Apr 29 14:04:43.695: INFO: Waiting up to 5m0s for pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422" in namespace "containers-5903" to be "Succeeded or Failed"
Apr 29 14:04:43.701: INFO: Pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164511ms
Apr 29 14:04:45.706: INFO: Pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011076106s
Apr 29 14:04:47.709: INFO: Pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422": Phase="Running", Reason="", readiness=true. Elapsed: 4.014368941s
Apr 29 14:04:49.713: INFO: Pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018227882s
STEP: Saw pod success
Apr 29 14:04:49.713: INFO: Pod "client-containers-c40dd362-68fb-4440-844c-ac7d260e8422" satisfied condition "Succeeded or Failed"
Apr 29 14:04:49.716: INFO: Trying to get logs from node kali-worker2 pod client-containers-c40dd362-68fb-4440-844c-ac7d260e8422 container test-container: 
STEP: delete the pod
Apr 29 14:04:49.766: INFO: Waiting for pod client-containers-c40dd362-68fb-4440-844c-ac7d260e8422 to disappear
Apr 29 14:04:49.779: INFO: Pod client-containers-c40dd362-68fb-4440-844c-ac7d260e8422 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:04:49.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5903" for this suite.

• [SLOW TEST:6.296 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":290,"completed":182,"skipped":2792,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:04:49.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:04:49.894: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/: 
alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/

alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1efdf3eb-cf42-44c4-8f34-7746a017cb77
STEP: Creating a pod to test consume secrets
Apr 29 14:04:50.122: INFO: Waiting up to 5m0s for pod "pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a" in namespace "secrets-4039" to be "Succeeded or Failed"
Apr 29 14:04:50.126: INFO: Pod "pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753779ms
Apr 29 14:04:52.184: INFO: Pod "pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061260786s
Apr 29 14:04:54.188: INFO: Pod "pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065757417s
STEP: Saw pod success
Apr 29 14:04:54.188: INFO: Pod "pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a" satisfied condition "Succeeded or Failed"
Apr 29 14:04:54.192: INFO: Trying to get logs from node kali-worker pod pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a container secret-volume-test: 
STEP: delete the pod
Apr 29 14:04:54.240: INFO: Waiting for pod pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a to disappear
Apr 29 14:04:54.261: INFO: Pod pod-secrets-b878902d-cccf-4b95-9494-165e2f8ffc7a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:04:54.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4039" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":184,"skipped":2870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:04:54.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-db38979a-f279-47cf-8ce7-20aefd468366
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:04:54.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4497" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":290,"completed":185,"skipped":2922,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:04:54.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
STEP: creating a pod

Apr 29 14:04:54.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-8312 -- logs-generator --log-lines-total 100 --run-duration 20s'
Apr 29 14:04:54.577: INFO: stderr: ""
Apr 29 14:04:54.577: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Waiting for log generator to start.
Apr 29 14:04:54.577: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Apr 29 14:04:54.577: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8312" to be "running and ready, or succeeded"
Apr 29 14:04:54.603: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 25.14827ms
Apr 29 14:04:56.633: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05550042s
Apr 29 14:04:58.637: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.059444992s
Apr 29 14:04:58.637: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Apr 29 14:04:58.637: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Apr 29 14:04:58.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312'
Apr 29 14:04:58.753: INFO: stderr: ""
Apr 29 14:04:58.753: INFO: stdout: "I0429 14:04:57.675656       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/6wh 460\nI0429 14:04:57.875800       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/xkx 570\nI0429 14:04:58.075831       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/hwf6 434\nI0429 14:04:58.275810       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kdfc 582\nI0429 14:04:58.475817       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/9dc 390\nI0429 14:04:58.675844       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kgjb 487\n"
STEP: limiting log lines
Apr 29 14:04:58.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312 --tail=1'
Apr 29 14:04:58.861: INFO: stderr: ""
Apr 29 14:04:58.861: INFO: stdout: "I0429 14:04:58.675844       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kgjb 487\n"
Apr 29 14:04:58.861: INFO: got output "I0429 14:04:58.675844       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kgjb 487\n"
STEP: limiting log bytes
Apr 29 14:04:58.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312 --limit-bytes=1'
Apr 29 14:04:58.979: INFO: stderr: ""
Apr 29 14:04:58.979: INFO: stdout: "I"
Apr 29 14:04:58.979: INFO: got output "I"
STEP: exposing timestamps
Apr 29 14:04:58.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312 --tail=1 --timestamps'
Apr 29 14:04:59.096: INFO: stderr: ""
Apr 29 14:04:59.097: INFO: stdout: "2020-04-29T14:04:59.075999651Z I0429 14:04:59.075816       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/q86 208\n"
Apr 29 14:04:59.097: INFO: got output "2020-04-29T14:04:59.075999651Z I0429 14:04:59.075816       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/q86 208\n"
STEP: restricting to a time range
Apr 29 14:05:01.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312 --since=1s'
Apr 29 14:05:01.708: INFO: stderr: ""
Apr 29 14:05:01.708: INFO: stdout: "I0429 14:05:00.875884       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5q8 543\nI0429 14:05:01.075871       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/ctgc 294\nI0429 14:05:01.275829       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/l5g 329\nI0429 14:05:01.475841       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/lc26 516\nI0429 14:05:01.675814       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/7tw 260\n"
Apr 29 14:05:01.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8312 --since=24h'
Apr 29 14:05:01.831: INFO: stderr: ""
Apr 29 14:05:01.831: INFO: stdout: "I0429 14:04:57.675656       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/6wh 460\nI0429 14:04:57.875800       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/xkx 570\nI0429 14:04:58.075831       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/hwf6 434\nI0429 14:04:58.275810       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kdfc 582\nI0429 14:04:58.475817       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/9dc 390\nI0429 14:04:58.675844       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/kgjb 487\nI0429 14:04:58.875837       1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/xsm 354\nI0429 14:04:59.075816       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/q86 208\nI0429 14:04:59.275873       1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/5nr 400\nI0429 14:04:59.475831       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/ttw 202\nI0429 14:04:59.675870       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/tgc7 442\nI0429 14:04:59.875826       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/qqck 511\nI0429 14:05:00.075831       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/lm7 558\nI0429 14:05:00.275871       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/zp5 484\nI0429 14:05:00.475804       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/pkc 230\nI0429 14:05:00.675811       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/xtq 444\nI0429 14:05:00.875884       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5q8 543\nI0429 14:05:01.075871       1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/ctgc 294\nI0429 14:05:01.275829       1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/l5g 329\nI0429 14:05:01.475841       1 
logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/lc26 516\nI0429 14:05:01.675814       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/7tw 260\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
Apr 29 14:05:01.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8312'
Apr 29 14:05:04.562: INFO: stderr: ""
Apr 29 14:05:04.562: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:04.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8312" for this suite.

• [SLOW TEST:10.183 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":290,"completed":186,"skipped":2925,"failed":0}
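The kubectl invocations in this test cover the main log-filtering flags; condensed for reference (pod and namespace names are taken from the run above, and a live cluster is assumed):

```shell
NS=kubectl-8312
kubectl logs logs-generator -n "$NS"                        # full log
kubectl logs logs-generator -n "$NS" --tail=1               # last line only
kubectl logs logs-generator -n "$NS" --limit-bytes=1        # first byte only
kubectl logs logs-generator -n "$NS" --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator -n "$NS" --since=1s             # entries from the last second
kubectl logs logs-generator -n "$NS" --since=24h            # entries from the last 24 hours
```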
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:04.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-62f1779d-6132-4d64-8303-ba2b9dd9d0a6
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:08.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-664" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":187,"skipped":3039,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:08.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:20.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1627" for this suite.

• [SLOW TEST:11.236 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":290,"completed":188,"skipped":3046,"failed":0}
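The ResourceQuota lifecycle above (create quota, create Service, observe `status.used` rise, delete Service, observe it fall) can be sketched with an object-count quota on Services; the quota name and limits below are illustrative, not from this run:

```shell
# Create a quota limiting Services in the current namespace, then read back
# how many Services the quota currently counts as used (cluster required).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-services
spec:
  hard:
    services: "10"
    services.nodeports: "1"
    services.loadbalancers: "1"
EOF
kubectl get resourcequota quota-services \
  -o jsonpath='{.status.used.services}'
```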
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:20.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 29 14:05:24.664: INFO: Successfully updated pod "labelsupdate0a3cd726-83d8-4270-a9d8-ad83221aa875"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:28.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5533" for this suite.

• [SLOW TEST:8.689 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":290,"completed":189,"skipped":3122,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:28.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:05:28.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40" in namespace "projected-1791" to be "Succeeded or Failed"
Apr 29 14:05:28.840: INFO: Pod "downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40": Phase="Pending", Reason="", readiness=false. Elapsed: 15.034076ms
Apr 29 14:05:30.844: INFO: Pod "downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019322159s
Apr 29 14:05:32.848: INFO: Pod "downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023115134s
STEP: Saw pod success
Apr 29 14:05:32.848: INFO: Pod "downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40" satisfied condition "Succeeded or Failed"
Apr 29 14:05:32.850: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40 container client-container: 
STEP: delete the pod
Apr 29 14:05:33.007: INFO: Waiting for pod downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40 to disappear
Apr 29 14:05:33.056: INFO: Pod downwardapi-volume-d2f65127-1637-4615-99ad-b3c5f949fd40 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:33.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1791" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":190,"skipped":3132,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:33.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-750ed68d-3655-4713-941c-877f27f2b6a3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:33.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9025" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":290,"completed":191,"skipped":3137,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:33.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-16f1d939-8e2b-486e-a189-f913491e56f9
STEP: Creating a pod to test consume configMaps
Apr 29 14:05:33.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d" in namespace "projected-6770" to be "Succeeded or Failed"
Apr 29 14:05:34.257: INFO: Pod "pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d": Phase="Pending", Reason="", readiness=false. Elapsed: 282.81465ms
Apr 29 14:05:36.298: INFO: Pod "pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324191596s
Apr 29 14:05:38.328: INFO: Pod "pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.354237932s
STEP: Saw pod success
Apr 29 14:05:38.329: INFO: Pod "pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d" satisfied condition "Succeeded or Failed"
Apr 29 14:05:38.331: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 14:05:38.394: INFO: Waiting for pod pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d to disappear
Apr 29 14:05:38.403: INFO: Pod pod-projected-configmaps-851cd284-30d1-4099-8ca8-bc487aa3044d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:38.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6770" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":290,"completed":192,"skipped":3144,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:38.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:05:38.477: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d" in namespace "downward-api-8189" to be "Succeeded or Failed"
Apr 29 14:05:38.510: INFO: Pod "downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.61082ms
Apr 29 14:05:40.518: INFO: Pod "downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040760235s
Apr 29 14:05:42.534: INFO: Pod "downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05657971s
STEP: Saw pod success
Apr 29 14:05:42.534: INFO: Pod "downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d" satisfied condition "Succeeded or Failed"
Apr 29 14:05:42.567: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d container client-container: 
STEP: delete the pod
Apr 29 14:05:42.585: INFO: Waiting for pod downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d to disappear
Apr 29 14:05:42.614: INFO: Pod downwardapi-volume-e3d1e45b-d8f2-498e-b9fa-4d4ae30d042d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:05:42.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8189" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":193,"skipped":3150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:05:42.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:05:42.770: INFO: Create a RollingUpdate DaemonSet
Apr 29 14:05:42.774: INFO: Check that daemon pods launch on every node of the cluster
Apr 29 14:05:42.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:42.813: INFO: Number of nodes with available pods: 0
Apr 29 14:05:42.813: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:05:43.817: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:43.821: INFO: Number of nodes with available pods: 0
Apr 29 14:05:43.821: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:05:44.861: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:44.864: INFO: Number of nodes with available pods: 0
Apr 29 14:05:44.864: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:05:45.873: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:45.877: INFO: Number of nodes with available pods: 0
Apr 29 14:05:45.877: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:05:46.817: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:46.821: INFO: Number of nodes with available pods: 2
Apr 29 14:05:46.821: INFO: Number of running nodes: 2, number of available pods: 2
Apr 29 14:05:46.821: INFO: Update the DaemonSet to trigger a rollout
Apr 29 14:05:46.873: INFO: Updating DaemonSet daemon-set
Apr 29 14:05:53.890: INFO: Roll back the DaemonSet before rollout is complete
Apr 29 14:05:53.898: INFO: Updating DaemonSet daemon-set
Apr 29 14:05:53.898: INFO: Make sure DaemonSet rollback is complete
Apr 29 14:05:53.933: INFO: Wrong image for pod: daemon-set-xq954. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 29 14:05:53.933: INFO: Pod daemon-set-xq954 is not available
Apr 29 14:05:53.936: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:54.962: INFO: Wrong image for pod: daemon-set-xq954. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 29 14:05:54.962: INFO: Pod daemon-set-xq954 is not available
Apr 29 14:05:54.966: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:55.941: INFO: Wrong image for pod: daemon-set-xq954. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 29 14:05:55.941: INFO: Pod daemon-set-xq954 is not available
Apr 29 14:05:55.944: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:05:56.962: INFO: Pod daemon-set-wjkwb is not available
Apr 29 14:05:56.966: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2340, will wait for the garbage collector to delete the pods
Apr 29 14:05:57.041: INFO: Deleting DaemonSet.extensions daemon-set took: 5.719112ms
Apr 29 14:05:57.441: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.246863ms
Apr 29 14:06:03.751: INFO: Number of nodes with available pods: 0
Apr 29 14:06:03.751: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 14:06:03.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2340/daemonsets","resourceVersion":"76865"},"items":null}

Apr 29 14:06:03.756: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2340/pods","resourceVersion":"76865"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:03.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2340" for this suite.

• [SLOW TEST:21.149 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":290,"completed":194,"skipped":3186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:03.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 29 14:06:10.413: INFO: Successfully updated pod "adopt-release-8fc5q"
STEP: Checking that the Job readopts the Pod
Apr 29 14:06:10.413: INFO: Waiting up to 15m0s for pod "adopt-release-8fc5q" in namespace "job-3252" to be "adopted"
Apr 29 14:06:10.459: INFO: Pod "adopt-release-8fc5q": Phase="Running", Reason="", readiness=true. Elapsed: 45.391708ms
Apr 29 14:06:12.463: INFO: Pod "adopt-release-8fc5q": Phase="Running", Reason="", readiness=true. Elapsed: 2.049692483s
Apr 29 14:06:12.463: INFO: Pod "adopt-release-8fc5q" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 29 14:06:13.035: INFO: Successfully updated pod "adopt-release-8fc5q"
STEP: Checking that the Job releases the Pod
Apr 29 14:06:13.035: INFO: Waiting up to 15m0s for pod "adopt-release-8fc5q" in namespace "job-3252" to be "released"
Apr 29 14:06:13.056: INFO: Pod "adopt-release-8fc5q": Phase="Running", Reason="", readiness=true. Elapsed: 21.403354ms
Apr 29 14:06:15.060: INFO: Pod "adopt-release-8fc5q": Phase="Running", Reason="", readiness=true. Elapsed: 2.025484863s
Apr 29 14:06:15.061: INFO: Pod "adopt-release-8fc5q" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:15.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3252" for this suite.

• [SLOW TEST:11.296 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":290,"completed":195,"skipped":3229,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:15.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 14:06:15.464: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 14:06:15.474: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 14:06:15.476: INFO: 
Logging pods the apiserver thinks is on node kali-worker before test
Apr 29 14:06:15.481: INFO: adopt-release-d6p2q from job-3252 started at 2020-04-29 14:06:13 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.481: INFO: 	Container c ready: false, restart count 0
Apr 29 14:06:15.481: INFO: adopt-release-dz67k from job-3252 started at 2020-04-29 14:06:04 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.481: INFO: 	Container c ready: true, restart count 0
Apr 29 14:06:15.481: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.481: INFO: 	Container kindnet-cni ready: true, restart count 1
Apr 29 14:06:15.481: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.481: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 14:06:15.481: INFO: 
Logging pods the apiserver thinks is on node kali-worker2 before test
Apr 29 14:06:15.485: INFO: adopt-release-8fc5q from job-3252 started at 2020-04-29 14:06:04 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.485: INFO: 	Container c ready: true, restart count 0
Apr 29 14:06:15.485: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.485: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 14:06:15.485: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
Apr 29 14:06:15.485: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Apr 29 14:06:17.050: INFO: Pod adopt-release-8fc5q requesting resource cpu=0m on Node kali-worker2
Apr 29 14:06:17.050: INFO: Pod adopt-release-d6p2q requesting resource cpu=0m on Node kali-worker
Apr 29 14:06:17.050: INFO: Pod adopt-release-dz67k requesting resource cpu=0m on Node kali-worker
Apr 29 14:06:17.050: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
Apr 29 14:06:17.050: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
Apr 29 14:06:17.050: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
Apr 29 14:06:17.050: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Apr 29 14:06:17.050: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Apr 29 14:06:17.124: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050.160a4f73caf353dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9992/filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050.160a4f7421123078], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050.160a4f748bebbee2], Reason = [Created], Message = [Created container filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050.160a4f74a0589ac6], Reason = [Started], Message = [Started container filler-pod-3d9a5b83-be2d-4fed-a425-a1179e5c4050]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b.160a4f73cee7123d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9992/filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b.160a4f744c2a43a5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b.160a4f749c33167a], Reason = [Created], Message = [Created container filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b.160a4f74aab807c9], Reason = [Started], Message = [Started container filler-pod-62236251-bd09-493c-b66c-eab3d3c2a60b]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.160a4f7535de81bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:24.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9992" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:9.479 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":290,"completed":196,"skipped":3246,"failed":0}
SSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:24.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:24.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-987" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":290,"completed":197,"skipped":3250,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:24.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 29 14:06:24.785: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 14:06:27.710: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:38.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3243" for this suite.

• [SLOW TEST:13.739 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":290,"completed":198,"skipped":3250,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:38.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:06:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4332" for this suite.

• [SLOW TEST:16.779 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":290,"completed":199,"skipped":3252,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:06:55.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 29 14:06:55.298: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 29 14:07:00.322: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:00.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3980" for this suite.

• [SLOW TEST:5.707 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":290,"completed":200,"skipped":3258,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:00.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Apr 29 14:07:01.215: INFO: Waiting up to 5m0s for pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d" in namespace "containers-26" to be "Succeeded or Failed"
Apr 29 14:07:01.263: INFO: Pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.209852ms
Apr 29 14:07:03.268: INFO: Pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053005017s
Apr 29 14:07:05.272: INFO: Pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057480601s
Apr 29 14:07:07.292: INFO: Pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077535351s
STEP: Saw pod success
Apr 29 14:07:07.292: INFO: Pod "client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d" satisfied condition "Succeeded or Failed"
Apr 29 14:07:07.295: INFO: Trying to get logs from node kali-worker2 pod client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d container test-container: 
STEP: delete the pod
Apr 29 14:07:07.336: INFO: Waiting for pod client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d to disappear
Apr 29 14:07:07.353: INFO: Pod client-containers-3e8a43de-6c9e-42d8-ba84-a15d1996122d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:07.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-26" for this suite.

• [SLOW TEST:6.576 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":290,"completed":201,"skipped":3260,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:07.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:07:07.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:11.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2399" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":290,"completed":202,"skipped":3265,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:11.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Apr 29 14:07:11.728: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix787899771/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:11.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8781" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":290,"completed":203,"skipped":3272,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:11.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:07:15.988: INFO: Waiting up to 5m0s for pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199" in namespace "pods-2555" to be "Succeeded or Failed"
Apr 29 14:07:16.015: INFO: Pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199": Phase="Pending", Reason="", readiness=false. Elapsed: 26.857417ms
Apr 29 14:07:19.653: INFO: Pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664467379s
Apr 29 14:07:21.658: INFO: Pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199": Phase="Pending", Reason="", readiness=false. Elapsed: 5.669392687s
Apr 29 14:07:23.662: INFO: Pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.673846359s
STEP: Saw pod success
Apr 29 14:07:23.662: INFO: Pod "client-envvars-8f6b4384-447c-4e02-9cb9-958794677199" satisfied condition "Succeeded or Failed"
Apr 29 14:07:23.665: INFO: Trying to get logs from node kali-worker2 pod client-envvars-8f6b4384-447c-4e02-9cb9-958794677199 container env3cont: 
STEP: delete the pod
Apr 29 14:07:23.732: INFO: Waiting for pod client-envvars-8f6b4384-447c-4e02-9cb9-958794677199 to disappear
Apr 29 14:07:23.820: INFO: Pod client-envvars-8f6b4384-447c-4e02-9cb9-958794677199 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:23.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2555" for this suite.

• [SLOW TEST:12.023 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":290,"completed":204,"skipped":3297,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:23.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 29 14:07:24.453: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3598 /api/v1/namespaces/watch-3598/configmaps/e2e-watch-test-watch-closed 8332ae05-c3f0-4c22-bce8-c3321b63a2e8 77447 0 2020-04-29 14:07:24 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-04-29 14:07:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:07:24.453: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3598 /api/v1/namespaces/watch-3598/configmaps/e2e-watch-test-watch-closed 8332ae05-c3f0-4c22-bce8-c3321b63a2e8 77448 0 2020-04-29 14:07:24 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-04-29 14:07:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 29 14:07:24.469: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3598 /api/v1/namespaces/watch-3598/configmaps/e2e-watch-test-watch-closed 8332ae05-c3f0-4c22-bce8-c3321b63a2e8 77449 0 2020-04-29 14:07:24 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-04-29 14:07:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 29 14:07:24.469: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-3598 /api/v1/namespaces/watch-3598/configmaps/e2e-watch-test-watch-closed 8332ae05-c3f0-4c22-bce8-c3321b63a2e8 77450 0 2020-04-29 14:07:24 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-04-29 14:07:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:24.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3598" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":290,"completed":205,"skipped":3316,"failed":0}
SSS
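Editor's note: the Watchers test above closes a watch after resourceVersion 77448, then opens a new one from that version and still observes the MODIFIED (77449) and DELETED (77450) events. A minimal Go sketch of that resume semantics over an in-memory event history (a real client would instead pass the last resourceVersion in the watch's ListOptions; the types here are illustrative):

```go
package main

import "fmt"

// event mirrors the watch notifications in the log: a type (ADDED, MODIFIED,
// DELETED) plus the object's resourceVersion (77447..77450 above).
type event struct {
	Type            string
	ResourceVersion int
}

// resumeFrom returns the events that occurred strictly after lastRV,
// modelling what a watch restarted at resourceVersion=lastRV delivers.
func resumeFrom(history []event, lastRV int) []event {
	var out []event
	for _, e := range history {
		if e.ResourceVersion > lastRV {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	history := []event{
		{"ADDED", 77447}, {"MODIFIED", 77448}, // seen by the first watch
		{"MODIFIED", 77449}, {"DELETED", 77450}, // occurred while it was closed
	}
	// Restart from the last version the first watch observed (77448).
	for _, e := range resumeFrom(history, 77448) {
		fmt.Println(e.Type, e.ResourceVersion)
	}
}
```

This prints only the MODIFIED/DELETED pair missed while the first watch was closed, matching the two "Got :" lines after the restart.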
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:24.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3231
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3231
STEP: creating replication controller externalsvc in namespace services-3231
I0429 14:07:24.880082       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3231, replica count: 2
I0429 14:07:27.930463       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:07:30.930698       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Apr 29 14:07:30.989: INFO: Creating new exec pod
Apr 29 14:07:35.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3231 execpoddthms -- /bin/sh -x -c nslookup clusterip-service'
Apr 29 14:07:38.930: INFO: stderr: "I0429 14:07:38.740916    3331 log.go:172] (0xc000d2e000) (0xc000652aa0) Create stream\nI0429 14:07:38.740953    3331 log.go:172] (0xc000d2e000) (0xc000652aa0) Stream added, broadcasting: 1\nI0429 14:07:38.745414    3331 log.go:172] (0xc000d2e000) Reply frame received for 1\nI0429 14:07:38.745509    3331 log.go:172] (0xc000d2e000) (0xc00063cd20) Create stream\nI0429 14:07:38.745546    3331 log.go:172] (0xc000d2e000) (0xc00063cd20) Stream added, broadcasting: 3\nI0429 14:07:38.747408    3331 log.go:172] (0xc000d2e000) Reply frame received for 3\nI0429 14:07:38.747469    3331 log.go:172] (0xc000d2e000) (0xc000652fa0) Create stream\nI0429 14:07:38.747484    3331 log.go:172] (0xc000d2e000) (0xc000652fa0) Stream added, broadcasting: 5\nI0429 14:07:38.748907    3331 log.go:172] (0xc000d2e000) Reply frame received for 5\nI0429 14:07:38.823857    3331 log.go:172] (0xc000d2e000) Data frame received for 5\nI0429 14:07:38.823889    3331 log.go:172] (0xc000652fa0) (5) Data frame handling\nI0429 14:07:38.823911    3331 log.go:172] (0xc000652fa0) (5) Data frame sent\n+ nslookup clusterip-service\nI0429 14:07:38.919956    3331 log.go:172] (0xc000d2e000) Data frame received for 3\nI0429 14:07:38.920002    3331 log.go:172] (0xc00063cd20) (3) Data frame handling\nI0429 14:07:38.920096    3331 log.go:172] (0xc00063cd20) (3) Data frame sent\nI0429 14:07:38.921744    3331 log.go:172] (0xc000d2e000) Data frame received for 3\nI0429 14:07:38.921787    3331 log.go:172] (0xc00063cd20) (3) Data frame handling\nI0429 14:07:38.921829    3331 log.go:172] (0xc00063cd20) (3) Data frame sent\nI0429 14:07:38.922368    3331 log.go:172] (0xc000d2e000) Data frame received for 5\nI0429 14:07:38.922404    3331 log.go:172] (0xc000652fa0) (5) Data frame handling\nI0429 14:07:38.922442    3331 log.go:172] (0xc000d2e000) Data frame received for 3\nI0429 14:07:38.922478    3331 log.go:172] (0xc00063cd20) (3) Data frame handling\nI0429 14:07:38.924698    3331 log.go:172] (0xc000d2e000) Data frame received for 1\nI0429 14:07:38.924736    3331 log.go:172] (0xc000652aa0) (1) Data frame handling\nI0429 14:07:38.924773    3331 log.go:172] (0xc000652aa0) (1) Data frame sent\nI0429 14:07:38.924798    3331 log.go:172] (0xc000d2e000) (0xc000652aa0) Stream removed, broadcasting: 1\nI0429 14:07:38.925609    3331 log.go:172] (0xc000d2e000) (0xc000652aa0) Stream removed, broadcasting: 1\nI0429 14:07:38.925635    3331 log.go:172] (0xc000d2e000) (0xc00063cd20) Stream removed, broadcasting: 3\nI0429 14:07:38.925859    3331 log.go:172] (0xc000d2e000) (0xc000652fa0) Stream removed, broadcasting: 5\n"
Apr 29 14:07:38.930: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3231.svc.cluster.local\tcanonical name = externalsvc.services-3231.svc.cluster.local.\nName:\texternalsvc.services-3231.svc.cluster.local\nAddress: 10.100.248.166\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3231, will wait for the garbage collector to delete the pods
Apr 29 14:07:39.123: INFO: Deleting ReplicationController externalsvc took: 7.448691ms
Apr 29 14:07:39.523: INFO: Terminating ReplicationController externalsvc pods took: 400.31505ms
Apr 29 14:07:44.339: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:44.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3231" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:19.951 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":290,"completed":206,"skipped":3319,"failed":0}
SSSSS
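Editor's note: the nslookup output above shows the core of the ClusterIP-to-ExternalName change: the original service name now resolves as a CNAME to the externalName target's cluster DNS name. A small Go sketch of how those in-cluster service FQDNs are composed (assuming the default "cluster.local" cluster domain; real clusters may configure a different one):

```go
package main

import "fmt"

// serviceFQDN composes the in-cluster DNS name for a Service, the
// "<service>.<namespace>.svc.cluster.local" form seen in the nslookup output.
func serviceFQDN(service, namespace string) string {
	return fmt.Sprintf("%s.%s.svc.cluster.local", service, namespace)
}

func main() {
	// After the type change, the old ClusterIP service name resolves as a
	// CNAME to the externalName target, as in the log above.
	src := serviceFQDN("clusterip-service", "services-3231")
	dst := serviceFQDN("externalsvc", "services-3231")
	fmt.Printf("%s -> CNAME -> %s\n", src, dst)
}
```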
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:44.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:07:45.239: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:07:47.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:07:49.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766065, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:07:52.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7934" for this suite.
STEP: Destroying namespace "webhook-7934-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.120 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":290,"completed":207,"skipped":3324,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:52.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 14:07:52.646: INFO: Waiting up to 5m0s for pod "pod-acdcd287-8877-4dbd-b557-c0035aaa0026" in namespace "emptydir-8214" to be "Succeeded or Failed"
Apr 29 14:07:52.648: INFO: Pod "pod-acdcd287-8877-4dbd-b557-c0035aaa0026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288056ms
Apr 29 14:07:54.688: INFO: Pod "pod-acdcd287-8877-4dbd-b557-c0035aaa0026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041701369s
Apr 29 14:07:56.692: INFO: Pod "pod-acdcd287-8877-4dbd-b557-c0035aaa0026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045892483s
STEP: Saw pod success
Apr 29 14:07:56.692: INFO: Pod "pod-acdcd287-8877-4dbd-b557-c0035aaa0026" satisfied condition "Succeeded or Failed"
Apr 29 14:07:56.695: INFO: Trying to get logs from node kali-worker2 pod pod-acdcd287-8877-4dbd-b557-c0035aaa0026 container test-container: 
STEP: delete the pod
Apr 29 14:07:56.731: INFO: Waiting for pod pod-acdcd287-8877-4dbd-b557-c0035aaa0026 to disappear
Apr 29 14:07:56.739: INFO: Pod pod-acdcd287-8877-4dbd-b557-c0035aaa0026 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:07:56.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8214" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":208,"skipped":3335,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:07:56.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:07:57.344: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:07:59.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:08:01.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723766077, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:08:04.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:08:04.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4039-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:05.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8640" for this suite.
STEP: Destroying namespace "webhook-8640-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.093 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":290,"completed":209,"skipped":3346,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:05.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:08:05.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53" in namespace "downward-api-9516" to be "Succeeded or Failed"
Apr 29 14:08:05.971: INFO: Pod "downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53": Phase="Pending", Reason="", readiness=false. Elapsed: 48.763194ms
Apr 29 14:08:07.976: INFO: Pod "downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0529143s
Apr 29 14:08:09.980: INFO: Pod "downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057346412s
STEP: Saw pod success
Apr 29 14:08:09.980: INFO: Pod "downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53" satisfied condition "Succeeded or Failed"
Apr 29 14:08:09.983: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53 container client-container: 
STEP: delete the pod
Apr 29 14:08:10.022: INFO: Waiting for pod downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53 to disappear
Apr 29 14:08:10.044: INFO: Pod downwardapi-volume-24d39c9d-9bb3-4c99-b3b3-bfe9e3f45d53 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:10.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9516" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":210,"skipped":3352,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:10.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:08:10.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70" in namespace "projected-9630" to be "Succeeded or Failed"
Apr 29 14:08:10.458: INFO: Pod "downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70": Phase="Pending", Reason="", readiness=false. Elapsed: 12.704031ms
Apr 29 14:08:12.507: INFO: Pod "downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061417818s
Apr 29 14:08:14.511: INFO: Pod "downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065300052s
STEP: Saw pod success
Apr 29 14:08:14.511: INFO: Pod "downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70" satisfied condition "Succeeded or Failed"
Apr 29 14:08:14.513: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70 container client-container: 
STEP: delete the pod
Apr 29 14:08:14.648: INFO: Waiting for pod downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70 to disappear
Apr 29 14:08:14.695: INFO: Pod downwardapi-volume-10478f17-7f14-4850-bd1d-e92d6ab66d70 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:14.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9630" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":290,"completed":211,"skipped":3360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:14.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 14:08:14.845: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 14:08:14.853: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 14:08:14.855: INFO: 
Logging pods the apiserver thinks are on node kali-worker before test
Apr 29 14:08:14.859: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 14:08:14.859: INFO: 	Container kindnet-cni ready: true, restart count 1
Apr 29 14:08:14.859: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 14:08:14.859: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 14:08:14.859: INFO: 
Logging pods the apiserver thinks are on node kali-worker2 before test
Apr 29 14:08:14.862: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 14:08:14.862: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 14:08:14.862: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container status recorded)
Apr 29 14:08:14.862: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7c5c25e6-0b97-44df-aa60-587d0d78ee09 90
STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-7c5c25e6-0b97-44df-aa60-587d0d78ee09 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7c5c25e6-0b97-44df-aa60-587d0d78ee09
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:32.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9222" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:17.853 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":290,"completed":212,"skipped":3420,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:32.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-8912/secret-test-529e3047-2e37-48c8-a96e-08e1305dce86
STEP: Creating a pod to test consume secrets
Apr 29 14:08:32.699: INFO: Waiting up to 5m0s for pod "pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d" in namespace "secrets-8912" to be "Succeeded or Failed"
Apr 29 14:08:32.710: INFO: Pod "pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.173574ms
Apr 29 14:08:34.748: INFO: Pod "pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04916387s
Apr 29 14:08:36.752: INFO: Pod "pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053298837s
STEP: Saw pod success
Apr 29 14:08:36.752: INFO: Pod "pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d" satisfied condition "Succeeded or Failed"
Apr 29 14:08:36.754: INFO: Trying to get logs from node kali-worker pod pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d container env-test: 
STEP: delete the pod
Apr 29 14:08:36.779: INFO: Waiting for pod pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d to disappear
Apr 29 14:08:36.782: INFO: Pod pod-configmaps-5eac8e81-0d9a-4839-bf41-8258aea98a4d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:36.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8912" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":290,"completed":213,"skipped":3441,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:36.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 29 14:08:43.389: INFO: Successfully updated pod "pod-update-96c58f64-ef51-48d7-a89f-0ce734d83b54"
STEP: verifying the updated pod is in kubernetes
Apr 29 14:08:43.421: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:43.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4919" for this suite.

• [SLOW TEST:6.639 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":290,"completed":214,"skipped":3444,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:43.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0429 14:08:53.854213       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 14:08:53.854: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:08:53.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4731" for this suite.

• [SLOW TEST:10.430 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":290,"completed":215,"skipped":3445,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:08:53.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Apr 29 14:10:54.710: INFO: Successfully updated pod "var-expansion-b436cad4-24ba-4453-9b73-d778e5a27ecf"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Apr 29 14:10:58.716: INFO: Deleting pod "var-expansion-b436cad4-24ba-4453-9b73-d778e5a27ecf" in namespace "var-expansion-3274"
Apr 29 14:10:58.721: INFO: Waiting up to 5m0s for pod "var-expansion-b436cad4-24ba-4453-9b73-d778e5a27ecf" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:11:34.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3274" for this suite.

• [SLOW TEST:160.884 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":290,"completed":216,"skipped":3453,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:11:34.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Apr 29 14:11:39.898: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:11:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2857" for this suite.

• [SLOW TEST:5.266 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":290,"completed":217,"skipped":3455,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:11:40.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 29 14:11:40.125: INFO: Waiting up to 5m0s for pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047" in namespace "emptydir-5721" to be "Succeeded or Failed"
Apr 29 14:11:40.170: INFO: Pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047": Phase="Pending", Reason="", readiness=false. Elapsed: 45.109727ms
Apr 29 14:11:42.175: INFO: Pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049198927s
Apr 29 14:11:44.205: INFO: Pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047": Phase="Running", Reason="", readiness=true. Elapsed: 4.079641475s
Apr 29 14:11:46.209: INFO: Pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083902499s
STEP: Saw pod success
Apr 29 14:11:46.209: INFO: Pod "pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047" satisfied condition "Succeeded or Failed"
Apr 29 14:11:46.212: INFO: Trying to get logs from node kali-worker2 pod pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047 container test-container: 
STEP: delete the pod
Apr 29 14:11:46.275: INFO: Waiting for pod pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047 to disappear
Apr 29 14:11:46.306: INFO: Pod pod-e8ca5d05-7d30-41a2-bfe9-a40df34c4047 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:11:46.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5721" for this suite.

• [SLOW TEST:6.327 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":218,"skipped":3461,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:11:46.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-b3387207-af4e-4c1e-bd12-228d7503de99
STEP: Creating a pod to test consume secrets
Apr 29 14:11:46.502: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b" in namespace "projected-3486" to be "Succeeded or Failed"
Apr 29 14:11:46.516: INFO: Pod "pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.585064ms
Apr 29 14:11:48.519: INFO: Pod "pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017042885s
Apr 29 14:11:50.523: INFO: Pod "pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020892812s
STEP: Saw pod success
Apr 29 14:11:50.523: INFO: Pod "pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b" satisfied condition "Succeeded or Failed"
Apr 29 14:11:50.526: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 14:11:50.730: INFO: Waiting for pod pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b to disappear
Apr 29 14:11:50.749: INFO: Pod pod-projected-secrets-4d2d380e-1419-4343-8ea3-946be39cba5b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:11:50.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3486" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":290,"completed":219,"skipped":3464,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:11:50.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Apr 29 14:11:50.863: INFO: Waiting up to 5m0s for pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e" in namespace "var-expansion-997" to be "Succeeded or Failed"
Apr 29 14:11:50.890: INFO: Pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.496767ms
Apr 29 14:11:52.896: INFO: Pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033191301s
Apr 29 14:11:54.900: INFO: Pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036969557s
Apr 29 14:11:56.904: INFO: Pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041521046s
STEP: Saw pod success
Apr 29 14:11:56.904: INFO: Pod "var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e" satisfied condition "Succeeded or Failed"
Apr 29 14:11:56.907: INFO: Trying to get logs from node kali-worker2 pod var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e container dapi-container: 
STEP: delete the pod
Apr 29 14:11:56.978: INFO: Waiting for pod var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e to disappear
Apr 29 14:11:56.991: INFO: Pod var-expansion-cdad8612-60ad-4cbf-9372-185b6e1eea1e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:11:56.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-997" for this suite.

• [SLOW TEST:6.241 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":290,"completed":220,"skipped":3473,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:11:56.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 14:12:05.111: INFO: DNS probes using dns-5330/dns-test-b1ecb572-e200-499d-b75c-4a14b9b2d592 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:05.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5330" for this suite.

• [SLOW TEST:8.633 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":290,"completed":221,"skipped":3486,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:05.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7541
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7541
STEP: creating replication controller externalsvc in namespace services-7541
I0429 14:12:06.559654       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7541, replica count: 2
I0429 14:12:09.610117       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 14:12:12.610373       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Apr 29 14:12:12.738: INFO: Creating new exec pod
Apr 29 14:12:18.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-7541 execpodqrr2h -- /bin/sh -x -c nslookup nodeport-service'
Apr 29 14:12:18.979: INFO: stderr: "I0429 14:12:18.890184    3363 log.go:172] (0xc0009bafd0) (0xc000ab2640) Create stream\nI0429 14:12:18.890240    3363 log.go:172] (0xc0009bafd0) (0xc000ab2640) Stream added, broadcasting: 1\nI0429 14:12:18.893891    3363 log.go:172] (0xc0009bafd0) Reply frame received for 1\nI0429 14:12:18.893920    3363 log.go:172] (0xc0009bafd0) (0xc0003fa460) Create stream\nI0429 14:12:18.893927    3363 log.go:172] (0xc0009bafd0) (0xc0003fa460) Stream added, broadcasting: 3\nI0429 14:12:18.894849    3363 log.go:172] (0xc0009bafd0) Reply frame received for 3\nI0429 14:12:18.894894    3363 log.go:172] (0xc0009bafd0) (0xc0003b0fa0) Create stream\nI0429 14:12:18.894916    3363 log.go:172] (0xc0009bafd0) (0xc0003b0fa0) Stream added, broadcasting: 5\nI0429 14:12:18.895886    3363 log.go:172] (0xc0009bafd0) Reply frame received for 5\nI0429 14:12:18.964913    3363 log.go:172] (0xc0009bafd0) Data frame received for 5\nI0429 14:12:18.964940    3363 log.go:172] (0xc0003b0fa0) (5) Data frame handling\nI0429 14:12:18.964958    3363 log.go:172] (0xc0003b0fa0) (5) Data frame sent\n+ nslookup nodeport-service\nI0429 14:12:18.971643    3363 log.go:172] (0xc0009bafd0) Data frame received for 3\nI0429 14:12:18.971661    3363 log.go:172] (0xc0003fa460) (3) Data frame handling\nI0429 14:12:18.971674    3363 log.go:172] (0xc0003fa460) (3) Data frame sent\nI0429 14:12:18.972589    3363 log.go:172] (0xc0009bafd0) Data frame received for 3\nI0429 14:12:18.972617    3363 log.go:172] (0xc0003fa460) (3) Data frame handling\nI0429 14:12:18.972638    3363 log.go:172] (0xc0003fa460) (3) Data frame sent\nI0429 14:12:18.972905    3363 log.go:172] (0xc0009bafd0) Data frame received for 5\nI0429 14:12:18.972925    3363 log.go:172] (0xc0003b0fa0) (5) Data frame handling\nI0429 14:12:18.973107    3363 log.go:172] (0xc0009bafd0) Data frame received for 3\nI0429 14:12:18.973239    3363 log.go:172] (0xc0003fa460) (3) Data frame handling\nI0429 14:12:18.974773    3363 log.go:172] 
(0xc0009bafd0) Data frame received for 1\nI0429 14:12:18.974820    3363 log.go:172] (0xc000ab2640) (1) Data frame handling\nI0429 14:12:18.974855    3363 log.go:172] (0xc000ab2640) (1) Data frame sent\nI0429 14:12:18.974885    3363 log.go:172] (0xc0009bafd0) (0xc000ab2640) Stream removed, broadcasting: 1\nI0429 14:12:18.974916    3363 log.go:172] (0xc0009bafd0) Go away received\nI0429 14:12:18.975214    3363 log.go:172] (0xc0009bafd0) (0xc000ab2640) Stream removed, broadcasting: 1\nI0429 14:12:18.975237    3363 log.go:172] (0xc0009bafd0) (0xc0003fa460) Stream removed, broadcasting: 3\nI0429 14:12:18.975246    3363 log.go:172] (0xc0009bafd0) (0xc0003b0fa0) Stream removed, broadcasting: 5\n"
Apr 29 14:12:18.979: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7541.svc.cluster.local\tcanonical name = externalsvc.services-7541.svc.cluster.local.\nName:\texternalsvc.services-7541.svc.cluster.local\nAddress: 10.97.136.104\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7541, will wait for the garbage collector to delete the pods
Apr 29 14:12:19.040: INFO: Deleting ReplicationController externalsvc took: 7.390162ms
Apr 29 14:12:19.540: INFO: Terminating ReplicationController externalsvc pods took: 500.24858ms
Apr 29 14:12:33.757: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:33.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7541" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:28.187 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":290,"completed":222,"skipped":3571,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:33.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:38.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8855" for this suite.

• [SLOW TEST:5.151 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":290,"completed":223,"skipped":3579,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:38.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:46.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9291" for this suite.

• [SLOW TEST:7.321 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":290,"completed":224,"skipped":3579,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:46.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Apr 29 14:12:46.383: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:46.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6507" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":290,"completed":225,"skipped":3583,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:46.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-42cbbcc7-b284-493a-907d-93a26973e909
STEP: Creating a pod to test consume secrets
Apr 29 14:12:46.609: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000" in namespace "projected-4237" to be "Succeeded or Failed"
Apr 29 14:12:46.620: INFO: Pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000": Phase="Pending", Reason="", readiness=false. Elapsed: 10.78789ms
Apr 29 14:12:48.624: INFO: Pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015159714s
Apr 29 14:12:50.628: INFO: Pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01923885s
Apr 29 14:12:52.632: INFO: Pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022956812s
STEP: Saw pod success
Apr 29 14:12:52.632: INFO: Pod "pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000" satisfied condition "Succeeded or Failed"
Apr 29 14:12:52.634: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000 container secret-volume-test: 
STEP: delete the pod
Apr 29 14:12:52.671: INFO: Waiting for pod pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000 to disappear
Apr 29 14:12:52.676: INFO: Pod pod-projected-secrets-bb73c905-3e03-49b8-ae40-9c9b55311000 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:52.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4237" for this suite.

• [SLOW TEST:6.196 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":290,"completed":226,"skipped":3599,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:52.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Apr 29 14:12:52.768: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8268" to be "Succeeded or Failed"
Apr 29 14:12:52.790: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.581264ms
Apr 29 14:12:54.794: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026141777s
Apr 29 14:12:56.798: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030388512s
Apr 29 14:12:58.803: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034688591s
STEP: Saw pod success
Apr 29 14:12:58.803: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 29 14:12:58.806: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Apr 29 14:12:58.838: INFO: Waiting for pod pod-host-path-test to disappear
Apr 29 14:12:58.863: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:12:58.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8268" for this suite.

• [SLOW TEST:6.187 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":227,"skipped":3611,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:12:58.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 29 14:12:58.993: INFO: Waiting up to 5m0s for pod "pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad" in namespace "emptydir-6047" to be "Succeeded or Failed"
Apr 29 14:12:59.012: INFO: Pod "pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad": Phase="Pending", Reason="", readiness=false. Elapsed: 18.113934ms
Apr 29 14:13:01.140: INFO: Pod "pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146445024s
Apr 29 14:13:03.143: INFO: Pod "pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150064274s
STEP: Saw pod success
Apr 29 14:13:03.144: INFO: Pod "pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad" satisfied condition "Succeeded or Failed"
Apr 29 14:13:03.146: INFO: Trying to get logs from node kali-worker2 pod pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad container test-container: 
STEP: delete the pod
Apr 29 14:13:03.182: INFO: Waiting for pod pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad to disappear
Apr 29 14:13:03.195: INFO: Pod pod-9a7b7ad6-a394-4039-847b-a2c2ef7415ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:13:03.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6047" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":228,"skipped":3627,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:13:03.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-mq5c
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 14:13:03.305: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mq5c" in namespace "subpath-2236" to be "Succeeded or Failed"
Apr 29 14:13:03.323: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.49479ms
Apr 29 14:13:05.328: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02309371s
Apr 29 14:13:07.494: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 4.18910032s
Apr 29 14:13:09.498: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 6.19336151s
Apr 29 14:13:11.502: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 8.19715515s
Apr 29 14:13:13.506: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 10.201347712s
Apr 29 14:13:15.510: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 12.205509224s
Apr 29 14:13:17.514: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 14.209031066s
Apr 29 14:13:19.518: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 16.213303651s
Apr 29 14:13:21.522: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 18.217447129s
Apr 29 14:13:23.527: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 20.221940158s
Apr 29 14:13:25.531: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Running", Reason="", readiness=true. Elapsed: 22.226409883s
Apr 29 14:13:27.535: INFO: Pod "pod-subpath-test-configmap-mq5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.230248844s
STEP: Saw pod success
Apr 29 14:13:27.535: INFO: Pod "pod-subpath-test-configmap-mq5c" satisfied condition "Succeeded or Failed"
Apr 29 14:13:27.538: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-mq5c container test-container-subpath-configmap-mq5c: 
STEP: delete the pod
Apr 29 14:13:27.611: INFO: Waiting for pod pod-subpath-test-configmap-mq5c to disappear
Apr 29 14:13:27.745: INFO: Pod pod-subpath-test-configmap-mq5c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mq5c
Apr 29 14:13:27.745: INFO: Deleting pod "pod-subpath-test-configmap-mq5c" in namespace "subpath-2236"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:13:27.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2236" for this suite.

• [SLOW TEST:24.557 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":290,"completed":229,"skipped":3637,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:13:27.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:13:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6209" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":290,"completed":230,"skipped":3668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:13:28.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0429 14:14:08.791549       7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 14:14:08.791: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:14:08.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-350" for this suite.

• [SLOW TEST:40.776 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":290,"completed":231,"skipped":3709,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:14:08.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 in namespace container-probe-1517
Apr 29 14:14:12.872: INFO: Started pod liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 in namespace container-probe-1517
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 14:14:12.875: INFO: Initial restart count of pod liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is 0
Apr 29 14:14:25.293: INFO: Restart count of pod container-probe-1517/liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is now 1 (12.418375888s elapsed)
Apr 29 14:14:45.494: INFO: Restart count of pod container-probe-1517/liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is now 2 (32.619010587s elapsed)
Apr 29 14:15:05.617: INFO: Restart count of pod container-probe-1517/liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is now 3 (52.742281304s elapsed)
Apr 29 14:15:26.330: INFO: Restart count of pod container-probe-1517/liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is now 4 (1m13.455175957s elapsed)
Apr 29 14:16:38.624: INFO: Restart count of pod container-probe-1517/liveness-0de012c1-2862-4d13-a4d1-4b9e8d4fda32 is now 5 (2m25.748992179s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:16:38.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1517" for this suite.

• [SLOW TEST:149.893 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":290,"completed":232,"skipped":3751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:16:38.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 29 14:16:39.127: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 29 14:16:39.161: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Apr 29 14:16:39.161: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 29 14:16:39.185: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Apr 29 14:16:39.185: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 29 14:16:39.267: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 29 14:16:39.267: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 29 14:16:46.775: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:16:46.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4461" for this suite.

• [SLOW TEST:8.163 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":290,"completed":233,"skipped":3773,"failed":0}
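The resource quantities the LimitRange test prints are raw byte counts in binary SI (Mi = 2^20, Gi = 2^30), shown alongside their suffixed forms. A quick arithmetic check that the numbers in the log lines above match their suffixes (the CPU entries use decimal SI: `{{100 -3}}` means 100 × 10⁻³ cores, i.e. 100m):

```python
# Binary SI multipliers used by Kubernetes resource quantities.
Mi, Gi = 2**20, 2**30

assert 200 * Mi == 209715200      # default memory request (209715200 in the log)
assert 500 * Mi == 524288000      # memory limit, shown as 500Mi
assert 150 * Mi == 157286400      # merged memory request, shown as 150Mi
assert 200 * Gi == 214748364800   # default ephemeral-storage request
assert 500 * Gi == 536870912000   # ephemeral-storage limit, shown as 500Gi
assert 150 * Gi == 161061273600   # merged ephemeral-storage request, 150Gi
print("quantities consistent")
```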
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:16:46.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:16:47.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af" in namespace "projected-463" to be "Succeeded or Failed"
Apr 29 14:16:47.170: INFO: Pod "downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af": Phase="Pending", Reason="", readiness=false. Elapsed: 66.547642ms
Apr 29 14:16:49.173: INFO: Pod "downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069958269s
Apr 29 14:16:51.177: INFO: Pod "downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074207996s
STEP: Saw pod success
Apr 29 14:16:51.177: INFO: Pod "downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af" satisfied condition "Succeeded or Failed"
Apr 29 14:16:51.181: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af container client-container: 
STEP: delete the pod
Apr 29 14:16:51.364: INFO: Waiting for pod downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af to disappear
Apr 29 14:16:51.473: INFO: Pod downwardapi-volume-2ad0049c-8d3f-484a-b2c9-c364a173a7af no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:16:51.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-463" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":290,"completed":234,"skipped":3786,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:16:51.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1029.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1029.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1029.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1029.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 14:17:07.739: INFO: DNS probes using dns-1029/dns-test-a5313460-c549-4b1a-bd6a-13ba1e14f421 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:17:07.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1029" for this suite.

• [SLOW TEST:16.486 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":290,"completed":235,"skipped":3798,"failed":0}
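The probe commands above derive each pod's DNS A-record name by rewriting its IP with dashes: `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1029.pod.cluster.local"}'`. The same transformation sketched in Python (the IP address is a made-up example; only the name format comes from the log):

```python
def pod_a_record(ip: str, namespace: str) -> str:
    """Dashed-IP pod DNS name, e.g. 10-244-1-5.<ns>.pod.cluster.local."""
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-1029"))
# 10-244-1-5.dns-1029.pod.cluster.local
```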
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:17:07.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:17:08.574: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 14:17:08.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:08.643: INFO: Number of nodes with available pods: 0
Apr 29 14:17:08.643: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:09.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:09.652: INFO: Number of nodes with available pods: 0
Apr 29 14:17:09.652: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:10.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:10.652: INFO: Number of nodes with available pods: 0
Apr 29 14:17:10.652: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:11.756: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:11.786: INFO: Number of nodes with available pods: 0
Apr 29 14:17:11.786: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:12.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:12.651: INFO: Number of nodes with available pods: 0
Apr 29 14:17:12.651: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:13.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:13.652: INFO: Number of nodes with available pods: 2
Apr 29 14:17:13.652: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 29 14:17:13.709: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:13.709: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:13.753: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:14.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:14.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:14.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:15.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:15.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:15.761: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:16.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:16.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:16.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:17.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:17.758: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:17.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:17.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:18.777: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:18.777: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:18.777: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:18.781: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:19.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:19.758: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:19.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:19.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:20.759: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:20.759: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:20.759: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:20.764: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:21.758: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:21.758: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:21.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:21.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:22.759: INFO: Wrong image for pod: daemon-set-gtfd5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:22.759: INFO: Pod daemon-set-gtfd5 is not available
Apr 29 14:17:22.759: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:22.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:23.758: INFO: Pod daemon-set-ht2rm is not available
Apr 29 14:17:23.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:23.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:24.758: INFO: Pod daemon-set-ht2rm is not available
Apr 29 14:17:24.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:24.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:25.758: INFO: Pod daemon-set-ht2rm is not available
Apr 29 14:17:25.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:25.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:26.759: INFO: Pod daemon-set-ht2rm is not available
Apr 29 14:17:26.759: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:26.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:27.757: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:27.760: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:28.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:28.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:29.783: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:29.783: INFO: Pod daemon-set-z5xwx is not available
Apr 29 14:17:29.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:30.764: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:30.764: INFO: Pod daemon-set-z5xwx is not available
Apr 29 14:17:30.769: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:31.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:31.759: INFO: Pod daemon-set-z5xwx is not available
Apr 29 14:17:31.763: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:32.758: INFO: Wrong image for pod: daemon-set-z5xwx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine.
Apr 29 14:17:32.758: INFO: Pod daemon-set-z5xwx is not available
Apr 29 14:17:32.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:33.773: INFO: Pod daemon-set-xrf6w is not available
Apr 29 14:17:33.792: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 29 14:17:33.799: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:33.802: INFO: Number of nodes with available pods: 1
Apr 29 14:17:33.803: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:34.807: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:34.812: INFO: Number of nodes with available pods: 1
Apr 29 14:17:34.812: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:35.808: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:35.811: INFO: Number of nodes with available pods: 1
Apr 29 14:17:35.811: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:36.819: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:36.822: INFO: Number of nodes with available pods: 1
Apr 29 14:17:36.822: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:17:37.807: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:17:37.809: INFO: Number of nodes with available pods: 2
Apr 29 14:17:37.809: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4352, will wait for the garbage collector to delete the pods
Apr 29 14:17:37.882: INFO: Deleting DaemonSet.extensions daemon-set took: 5.626396ms
Apr 29 14:17:38.182: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.244679ms
Apr 29 14:17:42.286: INFO: Number of nodes with available pods: 0
Apr 29 14:17:42.286: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 14:17:42.288: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4352/daemonsets","resourceVersion":"80694"},"items":null}

Apr 29 14:17:42.291: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4352/pods","resourceVersion":"80694"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:17:42.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4352" for this suite.

• [SLOW TEST:34.323 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":290,"completed":236,"skipped":3807,"failed":0}
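The long "Wrong image for pod" poll above repeats one check: the rollout is done only when every daemon pod reports the updated image. A sketch of that predicate, with pod names and image strings taken from the log (the dict representation is illustrative, not the framework's actual data structure):

```python
EXPECTED = "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"
OLD = "docker.io/library/httpd:2.4.38-alpine"

def rollout_done(pod_images: dict) -> bool:
    """True once every daemon pod runs the expected (updated) image."""
    return all(image == EXPECTED for image in pod_images.values())

# Mid-rollout state as seen around 14:17:23, then the finished state.
mid_rollout = {"daemon-set-ht2rm": EXPECTED, "daemon-set-z5xwx": OLD}
finished = {"daemon-set-ht2rm": EXPECTED, "daemon-set-xrf6w": EXPECTED}

print(rollout_done(mid_rollout), rollout_done(finished))  # False True
```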
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:17:42.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2279" for this suite.
STEP: Destroying namespace "nsdeletetest-2146" for this suite.
Apr 29 14:18:13.608: INFO: Namespace nsdeletetest-2146 was already deleted
STEP: Destroying namespace "nsdeletetest-1401" for this suite.

• [SLOW TEST:31.305 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":290,"completed":237,"skipped":3811,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:13.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-58301eed-3466-47ab-ae40-8c86705cff09
STEP: Creating a pod to test consume configMaps
Apr 29 14:18:13.678: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a" in namespace "projected-7914" to be "Succeeded or Failed"
Apr 29 14:18:13.741: INFO: Pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 63.065884ms
Apr 29 14:18:15.745: INFO: Pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067268206s
Apr 29 14:18:17.897: INFO: Pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a": Phase="Running", Reason="", readiness=true. Elapsed: 4.219119705s
Apr 29 14:18:19.902: INFO: Pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.22364238s
STEP: Saw pod success
Apr 29 14:18:19.902: INFO: Pod "pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a" satisfied condition "Succeeded or Failed"
Apr 29 14:18:19.905: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 14:18:19.982: INFO: Waiting for pod pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a to disappear
Apr 29 14:18:19.996: INFO: Pod pod-projected-configmaps-9d55af2c-3b81-4f62-bbf6-c2f5ec8b9a4a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:19.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7914" for this suite.

• [SLOW TEST:6.391 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":290,"completed":238,"skipped":3817,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:20.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8844.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8844.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8844.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8844.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 14:18:26.280: INFO: DNS probes using dns-8844/dns-test-f05fa1b8-2ece-4a73-8c03-2659645fd63b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:26.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8844" for this suite.

• [SLOW TEST:6.630 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":290,"completed":239,"skipped":3842,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:26.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:18:27.073: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7" in namespace "security-context-test-1998" to be "Succeeded or Failed"
Apr 29 14:18:27.088: INFO: Pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.034633ms
Apr 29 14:18:29.107: INFO: Pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03410289s
Apr 29 14:18:31.111: INFO: Pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038432878s
Apr 29 14:18:33.115: INFO: Pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04285281s
Apr 29 14:18:33.115: INFO: Pod "busybox-user-65534-6a058b09-1bbf-4ce0-96c0-d06308aafbe7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:33.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1998" for this suite.

• [SLOW TEST:6.500 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":240,"skipped":3851,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:33.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-6122/configmap-test-189cecee-94c4-44db-a3f8-ec73b442cb1f
STEP: Creating a pod to test consume configMaps
Apr 29 14:18:33.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c" in namespace "configmap-6122" to be "Succeeded or Failed"
Apr 29 14:18:33.249: INFO: Pod "pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.484724ms
Apr 29 14:18:35.316: INFO: Pod "pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076595221s
Apr 29 14:18:37.321: INFO: Pod "pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081381354s
STEP: Saw pod success
Apr 29 14:18:37.321: INFO: Pod "pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c" satisfied condition "Succeeded or Failed"
Apr 29 14:18:37.324: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c container env-test: 
STEP: delete the pod
Apr 29 14:18:37.420: INFO: Waiting for pod pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c to disappear
Apr 29 14:18:37.519: INFO: Pod pod-configmaps-0f9e4ef9-6fb0-437b-aef2-7cf375353a3c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:37.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6122" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":290,"completed":241,"skipped":3916,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:37.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:37.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1697" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":290,"completed":242,"skipped":3937,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:37.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:18:37.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22" in namespace "downward-api-9360" to be "Succeeded or Failed"
Apr 29 14:18:37.740: INFO: Pod "downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22": Phase="Pending", Reason="", readiness=false. Elapsed: 40.337118ms
Apr 29 14:18:39.885: INFO: Pod "downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18527055s
Apr 29 14:18:41.890: INFO: Pod "downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189862187s
STEP: Saw pod success
Apr 29 14:18:41.890: INFO: Pod "downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22" satisfied condition "Succeeded or Failed"
Apr 29 14:18:41.894: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22 container client-container: 
STEP: delete the pod
Apr 29 14:18:41.963: INFO: Waiting for pod downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22 to disappear
Apr 29 14:18:41.966: INFO: Pod downwardapi-volume-d948c5e1-94ff-46ce-8adc-dc83a7a3da22 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:41.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9360" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":290,"completed":243,"skipped":3941,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:41.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:42.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-130" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":290,"completed":244,"skipped":3950,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:42.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Apr 29 14:18:42.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:18:59.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4676" for this suite.

• [SLOW TEST:17.035 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":290,"completed":245,"skipped":3952,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:18:59.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 29 14:18:59.920: INFO: Pod name wrapped-volume-race-faf8e399-7f69-45f4-931b-b74b24cf1a2f: Found 0 pods out of 5
Apr 29 14:19:04.926: INFO: Pod name wrapped-volume-race-faf8e399-7f69-45f4-931b-b74b24cf1a2f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-faf8e399-7f69-45f4-931b-b74b24cf1a2f in namespace emptydir-wrapper-4524, will wait for the garbage collector to delete the pods
Apr 29 14:19:21.012: INFO: Deleting ReplicationController wrapped-volume-race-faf8e399-7f69-45f4-931b-b74b24cf1a2f took: 6.550898ms
Apr 29 14:19:21.412: INFO: Terminating ReplicationController wrapped-volume-race-faf8e399-7f69-45f4-931b-b74b24cf1a2f pods took: 400.253488ms
STEP: Creating RC which spawns configmap-volume pods
Apr 29 14:19:33.786: INFO: Pod name wrapped-volume-race-382ff141-adba-4ffc-9e88-3cc29a6928c3: Found 0 pods out of 5
Apr 29 14:19:38.796: INFO: Pod name wrapped-volume-race-382ff141-adba-4ffc-9e88-3cc29a6928c3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-382ff141-adba-4ffc-9e88-3cc29a6928c3 in namespace emptydir-wrapper-4524, will wait for the garbage collector to delete the pods
Apr 29 14:19:50.965: INFO: Deleting ReplicationController wrapped-volume-race-382ff141-adba-4ffc-9e88-3cc29a6928c3 took: 32.964332ms
Apr 29 14:19:51.266: INFO: Terminating ReplicationController wrapped-volume-race-382ff141-adba-4ffc-9e88-3cc29a6928c3 pods took: 300.234842ms
STEP: Creating RC which spawns configmap-volume pods
Apr 29 14:20:05.125: INFO: Pod name wrapped-volume-race-78266129-50a2-424e-96e5-87eebfdedbd4: Found 0 pods out of 5
Apr 29 14:20:10.136: INFO: Pod name wrapped-volume-race-78266129-50a2-424e-96e5-87eebfdedbd4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-78266129-50a2-424e-96e5-87eebfdedbd4 in namespace emptydir-wrapper-4524, will wait for the garbage collector to delete the pods
Apr 29 14:20:30.320: INFO: Deleting ReplicationController wrapped-volume-race-78266129-50a2-424e-96e5-87eebfdedbd4 took: 68.2261ms
Apr 29 14:20:30.720: INFO: Terminating ReplicationController wrapped-volume-race-78266129-50a2-424e-96e5-87eebfdedbd4 pods took: 400.276954ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:20:45.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4524" for this suite.

• [SLOW TEST:106.410 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":290,"completed":246,"skipped":3959,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:20:45.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Apr 29 14:20:45.766: INFO: namespace kubectl-9876
Apr 29 14:20:45.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9876'
Apr 29 14:20:49.194: INFO: stderr: ""
Apr 29 14:20:49.194: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 29 14:20:50.198: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 14:20:50.198: INFO: Found 0 / 1
Apr 29 14:20:51.419: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 14:20:51.419: INFO: Found 0 / 1
Apr 29 14:20:52.203: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 14:20:52.203: INFO: Found 0 / 1
Apr 29 14:20:53.256: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 14:20:53.256: INFO: Found 1 / 1
Apr 29 14:20:53.256: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 29 14:20:53.294: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 29 14:20:53.294: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 29 14:20:53.294: INFO: wait on agnhost-master startup in kubectl-9876 
Apr 29 14:20:53.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-w2xmq agnhost-master --namespace=kubectl-9876'
Apr 29 14:20:53.463: INFO: stderr: ""
Apr 29 14:20:53.463: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 29 14:20:53.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9876'
Apr 29 14:20:53.668: INFO: stderr: ""
Apr 29 14:20:53.668: INFO: stdout: "service/rm2 exposed\n"
Apr 29 14:20:53.704: INFO: Service rm2 in namespace kubectl-9876 found.
STEP: exposing service
Apr 29 14:20:55.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9876'
Apr 29 14:20:55.831: INFO: stderr: ""
Apr 29 14:20:55.831: INFO: stdout: "service/rm3 exposed\n"
Apr 29 14:20:55.840: INFO: Service rm3 in namespace kubectl-9876 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:20:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9876" for this suite.

• [SLOW TEST:12.248 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":290,"completed":247,"skipped":3978,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:20:57.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Apr 29 14:20:57.933: INFO: Waiting up to 5m0s for pod "var-expansion-1c11088d-1923-4417-9db9-e14c26da6243" in namespace "var-expansion-185" to be "Succeeded or Failed"
Apr 29 14:20:57.961: INFO: Pod "var-expansion-1c11088d-1923-4417-9db9-e14c26da6243": Phase="Pending", Reason="", readiness=false. Elapsed: 27.734766ms
Apr 29 14:20:59.965: INFO: Pod "var-expansion-1c11088d-1923-4417-9db9-e14c26da6243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031548734s
Apr 29 14:21:01.969: INFO: Pod "var-expansion-1c11088d-1923-4417-9db9-e14c26da6243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035949051s
STEP: Saw pod success
Apr 29 14:21:01.969: INFO: Pod "var-expansion-1c11088d-1923-4417-9db9-e14c26da6243" satisfied condition "Succeeded or Failed"
Apr 29 14:21:01.972: INFO: Trying to get logs from node kali-worker2 pod var-expansion-1c11088d-1923-4417-9db9-e14c26da6243 container dapi-container: 
STEP: delete the pod
Apr 29 14:21:02.004: INFO: Waiting for pod var-expansion-1c11088d-1923-4417-9db9-e14c26da6243 to disappear
Apr 29 14:21:02.009: INFO: Pod var-expansion-1c11088d-1923-4417-9db9-e14c26da6243 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:02.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-185" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":290,"completed":248,"skipped":3987,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:02.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-bf846c26-75a2-4643-8c80-e2146da40494
STEP: Creating secret with name s-test-opt-upd-de19d35e-8be0-4237-8c37-728fd0a09bde
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bf846c26-75a2-4643-8c80-e2146da40494
STEP: Updating secret s-test-opt-upd-de19d35e-8be0-4237-8c37-728fd0a09bde
STEP: Creating secret with name s-test-opt-create-569ecc92-58f3-4e75-bab1-73c69d2dd55c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:12.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4477" for this suite.

• [SLOW TEST:10.269 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":249,"skipped":3993,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:12.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:21:12.407: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d" in namespace "security-context-test-319" to be "Succeeded or Failed"
Apr 29 14:21:12.417: INFO: Pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.819952ms
Apr 29 14:21:14.497: INFO: Pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089912901s
Apr 29 14:21:16.502: INFO: Pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094653159s
Apr 29 14:21:18.506: INFO: Pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098945409s
Apr 29 14:21:18.506: INFO: Pod "busybox-readonly-false-7de3e9d6-ba40-444d-9b28-192bfeb9633d" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:18.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-319" for this suite.

• [SLOW TEST:6.229 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":290,"completed":250,"skipped":3995,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:18.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Apr 29 14:21:18.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
Apr 29 14:21:18.669: INFO: stderr: ""
Apr 29 14:21:18.669: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:18.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7159" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":290,"completed":251,"skipped":4011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:18.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:21:19.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 29 14:21:20.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3715 create -f -'
Apr 29 14:21:26.945: INFO: stderr: ""
Apr 29 14:21:26.945: INFO: stdout: "e2e-test-crd-publish-openapi-4513-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 29 14:21:26.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3715 delete e2e-test-crd-publish-openapi-4513-crds test-cr'
Apr 29 14:21:27.076: INFO: stderr: ""
Apr 29 14:21:27.076: INFO: stdout: "e2e-test-crd-publish-openapi-4513-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 29 14:21:27.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3715 apply -f -'
Apr 29 14:21:27.358: INFO: stderr: ""
Apr 29 14:21:27.358: INFO: stdout: "e2e-test-crd-publish-openapi-4513-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 29 14:21:27.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3715 delete e2e-test-crd-publish-openapi-4513-crds test-cr'
Apr 29 14:21:27.481: INFO: stderr: ""
Apr 29 14:21:27.481: INFO: stdout: "e2e-test-crd-publish-openapi-4513-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 29 14:21:27.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4513-crds'
Apr 29 14:21:27.837: INFO: stderr: ""
Apr 29 14:21:27.837: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4513-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:30.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3715" for this suite.

• [SLOW TEST:12.112 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":290,"completed":252,"skipped":4082,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:30.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-49340511-443a-4000-9d63-e6453ffde07d
STEP: Creating a pod to test consume secrets
Apr 29 14:21:31.064: INFO: Waiting up to 5m0s for pod "pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263" in namespace "secrets-8040" to be "Succeeded or Failed"
Apr 29 14:21:31.110: INFO: Pod "pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263": Phase="Pending", Reason="", readiness=false. Elapsed: 46.458045ms
Apr 29 14:21:33.115: INFO: Pod "pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051026734s
Apr 29 14:21:35.119: INFO: Pod "pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055478085s
STEP: Saw pod success
Apr 29 14:21:35.119: INFO: Pod "pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263" satisfied condition "Succeeded or Failed"
Apr 29 14:21:35.122: INFO: Trying to get logs from node kali-worker pod pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263 container secret-volume-test: 
STEP: delete the pod
Apr 29 14:21:35.168: INFO: Waiting for pod pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263 to disappear
Apr 29 14:21:35.175: INFO: Pod pod-secrets-7e4b55eb-bc40-4beb-acbc-0834b8606263 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:35.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8040" for this suite.
STEP: Destroying namespace "secret-namespace-8657" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":290,"completed":253,"skipped":4084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:35.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 29 14:21:39.792: INFO: Successfully updated pod "annotationupdateedaebc74-f537-43b9-9c13-380f866f7675"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:21:41.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7857" for this suite.

• [SLOW TEST:6.646 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":290,"completed":254,"skipped":4108,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:21:41.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 14:21:50.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.018: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.021: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.024: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.033: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.036: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.039: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.042: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:50.077: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:21:55.083: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.087: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.091: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.094: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.105: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.109: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.112: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.115: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:21:55.122: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:22:00.082: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.086: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.089: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.092: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.103: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.106: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.109: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.113: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:00.119: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:22:05.100: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.103: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.111: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.113: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.122: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.124: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.127: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.130: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:05.159: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:22:10.083: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.086: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.090: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.093: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.104: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.107: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.110: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.113: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:10.119: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:22:15.082: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.086: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.090: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.093: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.104: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.107: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.110: INFO: Unable to read jessie_udp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.113: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local from pod dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa: the server could not find the requested resource (get pods dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa)
Apr 29 14:22:15.119: INFO: Lookups using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local wheezy_udp@dns-test-service-2.dns-781.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local jessie_udp@dns-test-service-2.dns-781.svc.cluster.local jessie_tcp@dns-test-service-2.dns-781.svc.cluster.local]

Apr 29 14:22:20.112: INFO: DNS probes using dns-781/dns-test-53f20d85-9eff-48d6-89b2-b10a6fb41afa succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:22:20.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-781" for this suite.

• [SLOW TEST:39.007 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":290,"completed":255,"skipped":4109,"failed":0}
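The subdomain names probed above (e.g. `dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local`) only resolve once a headless service exists and a pod sets matching `hostname`/`subdomain` fields. A minimal sketch of that shape — the service and pod names are taken from the log, but the image, labels, and command are illustrative placeholders, not the suite's actual spec:

```yaml
# Headless service backing the subdomain; name matches the log's dns-test-service-2.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-781
spec:
  clusterIP: None            # headless: DNS returns per-pod records
  selector:
    dns-test: "true"
---
# Pod addressable as dns-querier-2.dns-test-service-2.dns-781.svc.cluster.local
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-781
  labels:
    dns-test: "true"
spec:
  hostname: dns-querier-2         # becomes the leftmost DNS label
  subdomain: dns-test-service-2   # must match the headless service name
  containers:
  - name: querier
    image: busybox                # placeholder image
    command: ["sleep", "3600"]
```

The early "could not find the requested resource" lookups are expected while the records propagate; the probes succeed once DNS converges, as the log shows five seconds later.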
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:22:20.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 29 14:22:20.954: INFO: Waiting up to 5m0s for pod "pod-34eac21b-b209-4da3-a78c-33fab4287398" in namespace "emptydir-4284" to be "Succeeded or Failed"
Apr 29 14:22:20.973: INFO: Pod "pod-34eac21b-b209-4da3-a78c-33fab4287398": Phase="Pending", Reason="", readiness=false. Elapsed: 19.091687ms
Apr 29 14:22:22.988: INFO: Pod "pod-34eac21b-b209-4da3-a78c-33fab4287398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033816472s
Apr 29 14:22:24.992: INFO: Pod "pod-34eac21b-b209-4da3-a78c-33fab4287398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038210209s
STEP: Saw pod success
Apr 29 14:22:24.992: INFO: Pod "pod-34eac21b-b209-4da3-a78c-33fab4287398" satisfied condition "Succeeded or Failed"
Apr 29 14:22:24.996: INFO: Trying to get logs from node kali-worker pod pod-34eac21b-b209-4da3-a78c-33fab4287398 container test-container: 
STEP: delete the pod
Apr 29 14:22:25.029: INFO: Waiting for pod pod-34eac21b-b209-4da3-a78c-33fab4287398 to disappear
Apr 29 14:22:25.034: INFO: Pod pod-34eac21b-b209-4da3-a78c-33fab4287398 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:22:25.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4284" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":256,"skipped":4126,"failed":0}
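The `(non-root,0666,default)` case above mounts an emptyDir on the default medium and verifies file mode 0666 as a non-root user. A rough sketch of such a pod — the image, user ID, and command are illustrative, not the suite's exact spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  securityContext:
    runAsUser: 1001              # non-root, per the test title
  containers:
  - name: test-container
    image: busybox               # placeholder; the suite uses its own test image
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium (node disk, not Memory)
  restartPolicy: Never
```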
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:22:25.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-cb3a5eee-a1be-4469-baec-09e0b599fd63
STEP: Creating a pod to test consume configMaps
Apr 29 14:22:25.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc" in namespace "configmap-9234" to be "Succeeded or Failed"
Apr 29 14:22:25.138: INFO: Pod "pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.576255ms
Apr 29 14:22:27.142: INFO: Pod "pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025913192s
Apr 29 14:22:29.146: INFO: Pod "pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029886815s
STEP: Saw pod success
Apr 29 14:22:29.146: INFO: Pod "pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc" satisfied condition "Succeeded or Failed"
Apr 29 14:22:29.149: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc container configmap-volume-test: 
STEP: delete the pod
Apr 29 14:22:29.190: INFO: Waiting for pod pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc to disappear
Apr 29 14:22:29.201: INFO: Pod pod-configmaps-640ed25d-537d-4728-8cb8-31692f4035bc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:22:29.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9234" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":257,"skipped":4229,"failed":0}
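Consuming a ConfigMap "with mappings" means each key is projected to an explicit path via `items`, rather than defaulting to the key name as the filename. An illustrative manifest — the names, keys, and command are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: busybox               # placeholder
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1     # key remapped to this relative path
  restartPolicy: Never
```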
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:22:29.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:02.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2746" for this suite.

• [SLOW TEST:33.073 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":290,"completed":258,"skipped":4271,"failed":0}
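The three containers checked above encode their restart policy in the name: `terminate-cmd-rpa` (RestartPolicy Always), `terminate-cmd-rpof` (OnFailure), and `terminate-cmd-rpn` (Never); for each, the suite asserts the resulting `RestartCount`, `Phase`, `Ready` condition, and `State`. A sketch of the Never-policy variant — the image and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn
spec:
  restartPolicy: Never           # rpn: container is not restarted after it exits
  containers:
  - name: terminate-cmd-rpn
    image: busybox               # placeholder
    command: ["sh", "-c", "exit 0"]
# With restartPolicy Never and exit code 0, the container terminates once
# (RestartCount 0), the pod phase becomes Succeeded, and Ready goes false.
```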
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:02.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:23:02.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a" in namespace "downward-api-8339" to be "Succeeded or Failed"
Apr 29 14:23:02.399: INFO: Pod "downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.41097ms
Apr 29 14:23:04.402: INFO: Pod "downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026078891s
Apr 29 14:23:06.407: INFO: Pod "downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030634255s
STEP: Saw pod success
Apr 29 14:23:06.407: INFO: Pod "downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a" satisfied condition "Succeeded or Failed"
Apr 29 14:23:06.411: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a container client-container: 
STEP: delete the pod
Apr 29 14:23:06.478: INFO: Waiting for pod downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a to disappear
Apr 29 14:23:06.490: INFO: Pod downwardapi-volume-e84cbcce-8b4f-44cb-a02c-1357d8ba079a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:06.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8339" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":259,"skipped":4302,"failed":0}
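"Set mode on item file" exercises a per-item `mode` in a downwardAPI volume, which overrides the volume-level `defaultMode` for that one file. An illustrative sketch — the field choices, image, and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  containers:
  - name: client-container
    image: busybox               # placeholder
    command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400               # per-item mode, overrides defaultMode
  restartPolicy: Never
```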
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:06.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-7474eda8-a3dc-4cdf-a673-5fd4f657cf42
STEP: Creating a pod to test consume configMaps
Apr 29 14:23:06.654: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751" in namespace "projected-3767" to be "Succeeded or Failed"
Apr 29 14:23:06.674: INFO: Pod "pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751": Phase="Pending", Reason="", readiness=false. Elapsed: 19.383377ms
Apr 29 14:23:08.816: INFO: Pod "pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161253126s
Apr 29 14:23:10.844: INFO: Pod "pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.190010627s
STEP: Saw pod success
Apr 29 14:23:10.844: INFO: Pod "pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751" satisfied condition "Succeeded or Failed"
Apr 29 14:23:10.847: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 14:23:10.879: INFO: Waiting for pod pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751 to disappear
Apr 29 14:23:10.885: INFO: Pod pod-projected-configmaps-75a663f7-13d1-4ec2-833a-6e4d84e43751 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:10.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3767" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":290,"completed":260,"skipped":4362,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:10.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-274c9feb-83fc-4777-bf7e-f30344567f9e
STEP: Creating a pod to test consume configMaps
Apr 29 14:23:11.285: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad" in namespace "projected-1429" to be "Succeeded or Failed"
Apr 29 14:23:11.319: INFO: Pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 33.844273ms
Apr 29 14:23:13.337: INFO: Pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052376072s
Apr 29 14:23:15.341: INFO: Pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05639147s
Apr 29 14:23:17.346: INFO: Pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060663416s
STEP: Saw pod success
Apr 29 14:23:17.346: INFO: Pod "pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad" satisfied condition "Succeeded or Failed"
Apr 29 14:23:17.349: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 14:23:17.383: INFO: Waiting for pod pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad to disappear
Apr 29 14:23:17.402: INFO: Pod pod-projected-configmaps-22a425d9-344b-4f6e-98ce-686233e5e7ad no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:17.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1429" for this suite.

• [SLOW TEST:6.518 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":261,"skipped":4379,"failed":0}
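The `defaultMode` variant, by contrast, sets the file mode once at the volume level, applying it to every key in the projection. Sketch with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox               # placeholder
    command: ["sh", "-c", "stat -c %a /etc/projected/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      defaultMode: 0400          # applies to all files in the projection
      sources:
      - configMap:
          name: projected-configmap-test-volume
  restartPolicy: Never
```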
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:17.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 14:23:17.579: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:17.600: INFO: Number of nodes with available pods: 0
Apr 29 14:23:17.600: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:18.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:18.608: INFO: Number of nodes with available pods: 0
Apr 29 14:23:18.608: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:19.605: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:19.608: INFO: Number of nodes with available pods: 0
Apr 29 14:23:19.608: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:21.140: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:21.143: INFO: Number of nodes with available pods: 0
Apr 29 14:23:21.144: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:21.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:21.608: INFO: Number of nodes with available pods: 0
Apr 29 14:23:21.608: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:24.160: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:24.165: INFO: Number of nodes with available pods: 1
Apr 29 14:23:24.165: INFO: Node kali-worker is running more than one daemon pod
Apr 29 14:23:24.604: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:24.607: INFO: Number of nodes with available pods: 2
Apr 29 14:23:24.607: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 29 14:23:24.648: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 14:23:24.659: INFO: Number of nodes with available pods: 2
Apr 29 14:23:24.659: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5386, will wait for the garbage collector to delete the pods
Apr 29 14:23:25.780: INFO: Deleting DaemonSet.extensions daemon-set took: 6.417676ms
Apr 29 14:23:26.181: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.340116ms
Apr 29 14:23:33.803: INFO: Number of nodes with available pods: 0
Apr 29 14:23:33.803: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 14:23:33.806: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5386/daemonsets","resourceVersion":"83329"},"items":null}

Apr 29 14:23:33.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5386/pods","resourceVersion":"83329"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:33.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5386" for this suite.

• [SLOW TEST:16.414 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":290,"completed":262,"skipped":4392,"failed":0}
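As the repeated "can't tolerate node kali-control-plane" lines show, a DaemonSet without a master toleration skips the tainted control-plane node and lands only on the two workers. A minimal DaemonSet of the shape this test creates — image, labels, and command are illustrative; uncommenting the toleration would schedule it on the control plane as well:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox           # placeholder
        command: ["sleep", "3600"]
      # tolerations:             # uncomment to also run on tainted masters
      # - key: node-role.kubernetes.io/master
      #   effect: NoSchedule
```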
SSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:33.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 29 14:23:33.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9760'
Apr 29 14:23:33.998: INFO: stderr: ""
Apr 29 14:23:33.998: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 29 14:23:39.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9760 -o json'
Apr 29 14:23:39.152: INFO: stderr: ""
Apr 29 14:23:39.152: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-04-29T14:23:33Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-04-29T14:23:33Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.1.201\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-04-29T14:23:37Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9760\",\n        \"resourceVersion\": \"83353\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-9760/pods/e2e-test-httpd-pod\",\n        \"uid\": \"5e773b15-a022-4fc0-bdb8-ba2248cec532\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-wnbtm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-wnbtm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-wnbtm\"\n                }\n       
     }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-04-29T14:23:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-04-29T14:23:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-04-29T14:23:37Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-04-29T14:23:33Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://0b59ae6e08e994f5db9eacca9ae5c0048daabb86a728d2aa8619365dd027f2bb\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-04-29T14:23:36Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.18\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.201\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.201\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-04-29T14:23:34Z\"\n    }\n}\n"
STEP: replace the image in the pod
Apr 29 14:23:39.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9760'
Apr 29 14:23:39.562: INFO: stderr: ""
Apr 29 14:23:39.562: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564
Apr 29 14:23:39.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9760'
Apr 29 14:23:53.401: INFO: stderr: ""
Apr 29 14:23:53.401: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:23:53.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9760" for this suite.

• [SLOW TEST:19.596 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":290,"completed":263,"skipped":4395,"failed":0}
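The `kubectl replace` step in the test above pipes a complete pod manifest on stdin with only the image swapped (unlike `kubectl patch`, `replace` requires the full object, which is why the test fetches the pod JSON first). A minimal sketch of such a manifest — the pod name, namespace, and target image are taken from the log; everything else is an assumption:

```yaml
# Hypothetical manifest of the kind piped to `kubectl replace -f -` above.
# Name/namespace/image match the log; remaining fields are a minimal sketch.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-9760
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # replaced image the test then verifies
```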
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:23:53.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Apr 29 14:23:53.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1495'
Apr 29 14:23:53.774: INFO: stderr: ""
Apr 29 14:23:53.774: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 14:23:53.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1495'
Apr 29 14:23:53.930: INFO: stderr: ""
Apr 29 14:23:53.930: INFO: stdout: "update-demo-nautilus-27x5c update-demo-nautilus-82ccq "
Apr 29 14:23:53.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27x5c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1495'
Apr 29 14:23:54.027: INFO: stderr: ""
Apr 29 14:23:54.028: INFO: stdout: ""
Apr 29 14:23:54.028: INFO: update-demo-nautilus-27x5c is created but not running
Apr 29 14:23:59.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1495'
Apr 29 14:23:59.142: INFO: stderr: ""
Apr 29 14:23:59.142: INFO: stdout: "update-demo-nautilus-27x5c update-demo-nautilus-82ccq "
Apr 29 14:23:59.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27x5c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1495'
Apr 29 14:23:59.259: INFO: stderr: ""
Apr 29 14:23:59.259: INFO: stdout: "true"
Apr 29 14:23:59.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27x5c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1495'
Apr 29 14:23:59.346: INFO: stderr: ""
Apr 29 14:23:59.346: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 14:23:59.346: INFO: validating pod update-demo-nautilus-27x5c
Apr 29 14:23:59.350: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 14:23:59.350: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 14:23:59.350: INFO: update-demo-nautilus-27x5c is verified up and running
Apr 29 14:23:59.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82ccq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1495'
Apr 29 14:23:59.438: INFO: stderr: ""
Apr 29 14:23:59.438: INFO: stdout: "true"
Apr 29 14:23:59.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-82ccq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1495'
Apr 29 14:23:59.534: INFO: stderr: ""
Apr 29 14:23:59.534: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 14:23:59.534: INFO: validating pod update-demo-nautilus-82ccq
Apr 29 14:23:59.539: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 14:23:59.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 14:23:59.539: INFO: update-demo-nautilus-82ccq is verified up and running
STEP: using delete to clean up resources
Apr 29 14:23:59.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1495'
Apr 29 14:23:59.639: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 14:23:59.639: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 29 14:23:59.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1495'
Apr 29 14:23:59.741: INFO: stderr: "No resources found in kubectl-1495 namespace.\n"
Apr 29 14:23:59.741: INFO: stdout: ""
Apr 29 14:23:59.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1495 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 14:23:59.844: INFO: stderr: ""
Apr 29 14:23:59.844: INFO: stdout: "update-demo-nautilus-27x5c\nupdate-demo-nautilus-82ccq\n"
Apr 29 14:24:00.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1495'
Apr 29 14:24:00.450: INFO: stderr: "No resources found in kubectl-1495 namespace.\n"
Apr 29 14:24:00.450: INFO: stdout: ""
Apr 29 14:24:00.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1495 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 14:24:00.610: INFO: stderr: ""
Apr 29 14:24:00.610: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:24:00.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1495" for this suite.

• [SLOW TEST:7.198 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":290,"completed":264,"skipped":4413,"failed":0}
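The replication controller created on stdin above can be sketched as follows. The container name, `name=update-demo` label selector, and image are all visible in the log's template queries; the replica count of 2 is inferred from the two pods observed:

```yaml
# Sketch of the nautilus RC created via `kubectl create -f -` in the test above.
# Labels, container name, and image come from the log; replicas inferred (2 pods seen).
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```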
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:24:00.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-7fc5543b-0f92-4dcf-8f3e-c94cf52f1ab4 in namespace container-probe-5330
Apr 29 14:24:05.029: INFO: Started pod liveness-7fc5543b-0f92-4dcf-8f3e-c94cf52f1ab4 in namespace container-probe-5330
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 14:24:05.032: INFO: Initial restart count of pod liveness-7fc5543b-0f92-4dcf-8f3e-c94cf52f1ab4 is 0
Apr 29 14:24:29.087: INFO: Restart count of pod container-probe-5330/liveness-7fc5543b-0f92-4dcf-8f3e-c94cf52f1ab4 is now 1 (24.055250159s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:24:29.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5330" for this suite.

• [SLOW TEST:28.540 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":290,"completed":265,"skipped":4416,"failed":0}
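The probe test above waits for `restartCount` to increment, which happens once the kubelet sees the `/healthz` handler fail. A minimal sketch of a pod exercising that path — the image, args, and port are assumptions, not the test's actual spec; only the `/healthz` HTTP probe shape follows the test:

```yaml
# Hypothetical pod with an HTTP GET liveness probe on /healthz.
# When the handler starts failing, the kubelet restarts the container,
# which is what the restartCount check in the test observes.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumption: agnhost's liveness server
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
```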
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:24:29.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 29 14:24:39.487: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:39.487: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:39.533885       7 log.go:172] (0xc0019b8420) (0xc001b25180) Create stream
I0429 14:24:39.533916       7 log.go:172] (0xc0019b8420) (0xc001b25180) Stream added, broadcasting: 1
I0429 14:24:39.536013       7 log.go:172] (0xc0019b8420) Reply frame received for 1
I0429 14:24:39.536089       7 log.go:172] (0xc0019b8420) (0xc002c3a820) Create stream
I0429 14:24:39.536113       7 log.go:172] (0xc0019b8420) (0xc002c3a820) Stream added, broadcasting: 3
I0429 14:24:39.537022       7 log.go:172] (0xc0019b8420) Reply frame received for 3
I0429 14:24:39.537057       7 log.go:172] (0xc0019b8420) (0xc001b25220) Create stream
I0429 14:24:39.537076       7 log.go:172] (0xc0019b8420) (0xc001b25220) Stream added, broadcasting: 5
I0429 14:24:39.538237       7 log.go:172] (0xc0019b8420) Reply frame received for 5
I0429 14:24:39.628995       7 log.go:172] (0xc0019b8420) Data frame received for 5
I0429 14:24:39.629026       7 log.go:172] (0xc001b25220) (5) Data frame handling
I0429 14:24:39.629060       7 log.go:172] (0xc0019b8420) Data frame received for 3
I0429 14:24:39.629090       7 log.go:172] (0xc002c3a820) (3) Data frame handling
I0429 14:24:39.629255       7 log.go:172] (0xc002c3a820) (3) Data frame sent
I0429 14:24:39.629276       7 log.go:172] (0xc0019b8420) Data frame received for 3
I0429 14:24:39.629288       7 log.go:172] (0xc002c3a820) (3) Data frame handling
I0429 14:24:39.631246       7 log.go:172] (0xc0019b8420) Data frame received for 1
I0429 14:24:39.631270       7 log.go:172] (0xc001b25180) (1) Data frame handling
I0429 14:24:39.631286       7 log.go:172] (0xc001b25180) (1) Data frame sent
I0429 14:24:39.631302       7 log.go:172] (0xc0019b8420) (0xc001b25180) Stream removed, broadcasting: 1
I0429 14:24:39.631319       7 log.go:172] (0xc0019b8420) Go away received
I0429 14:24:39.631444       7 log.go:172] (0xc0019b8420) (0xc001b25180) Stream removed, broadcasting: 1
I0429 14:24:39.631466       7 log.go:172] (0xc0019b8420) (0xc002c3a820) Stream removed, broadcasting: 3
I0429 14:24:39.631483       7 log.go:172] (0xc0019b8420) (0xc001b25220) Stream removed, broadcasting: 5
Apr 29 14:24:39.631: INFO: Exec stderr: ""
Apr 29 14:24:39.631: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:39.631: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:39.669781       7 log.go:172] (0xc002e16630) (0xc0015fd220) Create stream
I0429 14:24:39.669823       7 log.go:172] (0xc002e16630) (0xc0015fd220) Stream added, broadcasting: 1
I0429 14:24:39.671714       7 log.go:172] (0xc002e16630) Reply frame received for 1
I0429 14:24:39.671763       7 log.go:172] (0xc002e16630) (0xc001b25400) Create stream
I0429 14:24:39.671781       7 log.go:172] (0xc002e16630) (0xc001b25400) Stream added, broadcasting: 3
I0429 14:24:39.672514       7 log.go:172] (0xc002e16630) Reply frame received for 3
I0429 14:24:39.672552       7 log.go:172] (0xc002e16630) (0xc001b254a0) Create stream
I0429 14:24:39.672568       7 log.go:172] (0xc002e16630) (0xc001b254a0) Stream added, broadcasting: 5
I0429 14:24:39.673760       7 log.go:172] (0xc002e16630) Reply frame received for 5
I0429 14:24:39.746358       7 log.go:172] (0xc002e16630) Data frame received for 5
I0429 14:24:39.746457       7 log.go:172] (0xc001b254a0) (5) Data frame handling
I0429 14:24:39.746519       7 log.go:172] (0xc002e16630) Data frame received for 3
I0429 14:24:39.746565       7 log.go:172] (0xc001b25400) (3) Data frame handling
I0429 14:24:39.746625       7 log.go:172] (0xc001b25400) (3) Data frame sent
I0429 14:24:39.746675       7 log.go:172] (0xc002e16630) Data frame received for 3
I0429 14:24:39.746721       7 log.go:172] (0xc001b25400) (3) Data frame handling
I0429 14:24:39.751826       7 log.go:172] (0xc002e16630) Data frame received for 1
I0429 14:24:39.751850       7 log.go:172] (0xc0015fd220) (1) Data frame handling
I0429 14:24:39.751869       7 log.go:172] (0xc0015fd220) (1) Data frame sent
I0429 14:24:39.751888       7 log.go:172] (0xc002e16630) (0xc0015fd220) Stream removed, broadcasting: 1
I0429 14:24:39.751905       7 log.go:172] (0xc002e16630) Go away received
I0429 14:24:39.752017       7 log.go:172] (0xc002e16630) (0xc0015fd220) Stream removed, broadcasting: 1
I0429 14:24:39.752040       7 log.go:172] (0xc002e16630) (0xc001b25400) Stream removed, broadcasting: 3
I0429 14:24:39.752050       7 log.go:172] (0xc002e16630) (0xc001b254a0) Stream removed, broadcasting: 5
Apr 29 14:24:39.752: INFO: Exec stderr: ""
Apr 29 14:24:39.752: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:39.752: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:39.777652       7 log.go:172] (0xc003c789a0) (0xc002478320) Create stream
I0429 14:24:39.777678       7 log.go:172] (0xc003c789a0) (0xc002478320) Stream added, broadcasting: 1
I0429 14:24:39.779643       7 log.go:172] (0xc003c789a0) Reply frame received for 1
I0429 14:24:39.779662       7 log.go:172] (0xc003c789a0) (0xc00233a780) Create stream
I0429 14:24:39.779667       7 log.go:172] (0xc003c789a0) (0xc00233a780) Stream added, broadcasting: 3
I0429 14:24:39.780446       7 log.go:172] (0xc003c789a0) Reply frame received for 3
I0429 14:24:39.780488       7 log.go:172] (0xc003c789a0) (0xc001b25540) Create stream
I0429 14:24:39.780502       7 log.go:172] (0xc003c789a0) (0xc001b25540) Stream added, broadcasting: 5
I0429 14:24:39.781836       7 log.go:172] (0xc003c789a0) Reply frame received for 5
I0429 14:24:39.837905       7 log.go:172] (0xc003c789a0) Data frame received for 3
I0429 14:24:39.837936       7 log.go:172] (0xc00233a780) (3) Data frame handling
I0429 14:24:39.837950       7 log.go:172] (0xc00233a780) (3) Data frame sent
I0429 14:24:39.837974       7 log.go:172] (0xc003c789a0) Data frame received for 5
I0429 14:24:39.838008       7 log.go:172] (0xc001b25540) (5) Data frame handling
I0429 14:24:39.838055       7 log.go:172] (0xc003c789a0) Data frame received for 3
I0429 14:24:39.838084       7 log.go:172] (0xc00233a780) (3) Data frame handling
I0429 14:24:39.839539       7 log.go:172] (0xc003c789a0) Data frame received for 1
I0429 14:24:39.839568       7 log.go:172] (0xc002478320) (1) Data frame handling
I0429 14:24:39.839584       7 log.go:172] (0xc002478320) (1) Data frame sent
I0429 14:24:39.839602       7 log.go:172] (0xc003c789a0) (0xc002478320) Stream removed, broadcasting: 1
I0429 14:24:39.839638       7 log.go:172] (0xc003c789a0) Go away received
I0429 14:24:39.839722       7 log.go:172] (0xc003c789a0) (0xc002478320) Stream removed, broadcasting: 1
I0429 14:24:39.839734       7 log.go:172] (0xc003c789a0) (0xc00233a780) Stream removed, broadcasting: 3
I0429 14:24:39.839739       7 log.go:172] (0xc003c789a0) (0xc001b25540) Stream removed, broadcasting: 5
Apr 29 14:24:39.839: INFO: Exec stderr: ""
Apr 29 14:24:39.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:39.839: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:39.869940       7 log.go:172] (0xc0019b8a50) (0xc001b259a0) Create stream
I0429 14:24:39.869970       7 log.go:172] (0xc0019b8a50) (0xc001b259a0) Stream added, broadcasting: 1
I0429 14:24:39.872410       7 log.go:172] (0xc0019b8a50) Reply frame received for 1
I0429 14:24:39.872473       7 log.go:172] (0xc0019b8a50) (0xc0024783c0) Create stream
I0429 14:24:39.872492       7 log.go:172] (0xc0019b8a50) (0xc0024783c0) Stream added, broadcasting: 3
I0429 14:24:39.873668       7 log.go:172] (0xc0019b8a50) Reply frame received for 3
I0429 14:24:39.873713       7 log.go:172] (0xc0019b8a50) (0xc00233a820) Create stream
I0429 14:24:39.873731       7 log.go:172] (0xc0019b8a50) (0xc00233a820) Stream added, broadcasting: 5
I0429 14:24:39.874567       7 log.go:172] (0xc0019b8a50) Reply frame received for 5
I0429 14:24:39.933694       7 log.go:172] (0xc0019b8a50) Data frame received for 3
I0429 14:24:39.933755       7 log.go:172] (0xc0024783c0) (3) Data frame handling
I0429 14:24:39.933783       7 log.go:172] (0xc0024783c0) (3) Data frame sent
I0429 14:24:39.933805       7 log.go:172] (0xc0019b8a50) Data frame received for 3
I0429 14:24:39.933825       7 log.go:172] (0xc0024783c0) (3) Data frame handling
I0429 14:24:39.933910       7 log.go:172] (0xc0019b8a50) Data frame received for 5
I0429 14:24:39.933953       7 log.go:172] (0xc00233a820) (5) Data frame handling
I0429 14:24:39.935539       7 log.go:172] (0xc0019b8a50) Data frame received for 1
I0429 14:24:39.935561       7 log.go:172] (0xc001b259a0) (1) Data frame handling
I0429 14:24:39.935580       7 log.go:172] (0xc001b259a0) (1) Data frame sent
I0429 14:24:39.935596       7 log.go:172] (0xc0019b8a50) (0xc001b259a0) Stream removed, broadcasting: 1
I0429 14:24:39.935666       7 log.go:172] (0xc0019b8a50) Go away received
I0429 14:24:39.935699       7 log.go:172] (0xc0019b8a50) (0xc001b259a0) Stream removed, broadcasting: 1
I0429 14:24:39.935717       7 log.go:172] (0xc0019b8a50) (0xc0024783c0) Stream removed, broadcasting: 3
I0429 14:24:39.935727       7 log.go:172] (0xc0019b8a50) (0xc00233a820) Stream removed, broadcasting: 5
Apr 29 14:24:39.935: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 29 14:24:39.935: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:39.935: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:39.972131       7 log.go:172] (0xc002e16d10) (0xc0015fd400) Create stream
I0429 14:24:39.972158       7 log.go:172] (0xc002e16d10) (0xc0015fd400) Stream added, broadcasting: 1
I0429 14:24:39.974632       7 log.go:172] (0xc002e16d10) Reply frame received for 1
I0429 14:24:39.974673       7 log.go:172] (0xc002e16d10) (0xc001b25a40) Create stream
I0429 14:24:39.974688       7 log.go:172] (0xc002e16d10) (0xc001b25a40) Stream added, broadcasting: 3
I0429 14:24:39.975767       7 log.go:172] (0xc002e16d10) Reply frame received for 3
I0429 14:24:39.975789       7 log.go:172] (0xc002e16d10) (0xc00233a8c0) Create stream
I0429 14:24:39.975802       7 log.go:172] (0xc002e16d10) (0xc00233a8c0) Stream added, broadcasting: 5
I0429 14:24:39.976729       7 log.go:172] (0xc002e16d10) Reply frame received for 5
I0429 14:24:40.040751       7 log.go:172] (0xc002e16d10) Data frame received for 5
I0429 14:24:40.040794       7 log.go:172] (0xc00233a8c0) (5) Data frame handling
I0429 14:24:40.040822       7 log.go:172] (0xc002e16d10) Data frame received for 3
I0429 14:24:40.040840       7 log.go:172] (0xc001b25a40) (3) Data frame handling
I0429 14:24:40.040857       7 log.go:172] (0xc001b25a40) (3) Data frame sent
I0429 14:24:40.040877       7 log.go:172] (0xc002e16d10) Data frame received for 3
I0429 14:24:40.040894       7 log.go:172] (0xc001b25a40) (3) Data frame handling
I0429 14:24:40.042889       7 log.go:172] (0xc002e16d10) Data frame received for 1
I0429 14:24:40.042945       7 log.go:172] (0xc0015fd400) (1) Data frame handling
I0429 14:24:40.042969       7 log.go:172] (0xc0015fd400) (1) Data frame sent
I0429 14:24:40.042981       7 log.go:172] (0xc002e16d10) (0xc0015fd400) Stream removed, broadcasting: 1
I0429 14:24:40.042994       7 log.go:172] (0xc002e16d10) Go away received
I0429 14:24:40.043096       7 log.go:172] (0xc002e16d10) (0xc0015fd400) Stream removed, broadcasting: 1
I0429 14:24:40.043133       7 log.go:172] (0xc002e16d10) (0xc001b25a40) Stream removed, broadcasting: 3
I0429 14:24:40.043145       7 log.go:172] (0xc002e16d10) (0xc00233a8c0) Stream removed, broadcasting: 5
Apr 29 14:24:40.043: INFO: Exec stderr: ""
Apr 29 14:24:40.043: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:40.043: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:40.079015       7 log.go:172] (0xc0028de420) (0xc00233adc0) Create stream
I0429 14:24:40.079047       7 log.go:172] (0xc0028de420) (0xc00233adc0) Stream added, broadcasting: 1
I0429 14:24:40.081610       7 log.go:172] (0xc0028de420) Reply frame received for 1
I0429 14:24:40.081656       7 log.go:172] (0xc0028de420) (0xc002c3a960) Create stream
I0429 14:24:40.081672       7 log.go:172] (0xc0028de420) (0xc002c3a960) Stream added, broadcasting: 3
I0429 14:24:40.082617       7 log.go:172] (0xc0028de420) Reply frame received for 3
I0429 14:24:40.082661       7 log.go:172] (0xc0028de420) (0xc001b25b80) Create stream
I0429 14:24:40.082678       7 log.go:172] (0xc0028de420) (0xc001b25b80) Stream added, broadcasting: 5
I0429 14:24:40.083605       7 log.go:172] (0xc0028de420) Reply frame received for 5
I0429 14:24:40.165490       7 log.go:172] (0xc0028de420) Data frame received for 5
I0429 14:24:40.165539       7 log.go:172] (0xc001b25b80) (5) Data frame handling
I0429 14:24:40.165591       7 log.go:172] (0xc0028de420) Data frame received for 3
I0429 14:24:40.165614       7 log.go:172] (0xc002c3a960) (3) Data frame handling
I0429 14:24:40.165648       7 log.go:172] (0xc002c3a960) (3) Data frame sent
I0429 14:24:40.165750       7 log.go:172] (0xc0028de420) Data frame received for 3
I0429 14:24:40.165779       7 log.go:172] (0xc002c3a960) (3) Data frame handling
I0429 14:24:40.166959       7 log.go:172] (0xc0028de420) Data frame received for 1
I0429 14:24:40.167005       7 log.go:172] (0xc00233adc0) (1) Data frame handling
I0429 14:24:40.167066       7 log.go:172] (0xc00233adc0) (1) Data frame sent
I0429 14:24:40.167096       7 log.go:172] (0xc0028de420) (0xc00233adc0) Stream removed, broadcasting: 1
I0429 14:24:40.167162       7 log.go:172] (0xc0028de420) Go away received
I0429 14:24:40.167217       7 log.go:172] (0xc0028de420) (0xc00233adc0) Stream removed, broadcasting: 1
I0429 14:24:40.167246       7 log.go:172] (0xc0028de420) (0xc002c3a960) Stream removed, broadcasting: 3
I0429 14:24:40.167270       7 log.go:172] (0xc0028de420) (0xc001b25b80) Stream removed, broadcasting: 5
Apr 29 14:24:40.167: INFO: Exec stderr: ""
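Per the verification steps above, `test-pod` runs three busybox containers: busybox-1 and busybox-2 receive the kubelet-managed `/etc/hosts`, while busybox-3 mounts `/etc/hosts` itself and is therefore left unmanaged. A hedged sketch of that pod shape (image, command, and volume name are assumptions; only the three-container layout and the busybox-3 mount follow the log):

```yaml
# Hypothetical shape of the hostNetwork=false test pod:
# busybox-1/busybox-2 get the kubelet-managed /etc/hosts;
# busybox-3 mounts its own /etc/hosts, so the kubelet leaves it alone.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox-1
    image: busybox              # assumption: any image providing `cat`
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
```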
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 29 14:24:40.167: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:40.167: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:40.203352       7 log.go:172] (0xc0028deb00) (0xc00233b040) Create stream
I0429 14:24:40.203379       7 log.go:172] (0xc0028deb00) (0xc00233b040) Stream added, broadcasting: 1
I0429 14:24:40.205716       7 log.go:172] (0xc0028deb00) Reply frame received for 1
I0429 14:24:40.205799       7 log.go:172] (0xc0028deb00) (0xc0015fd4a0) Create stream
I0429 14:24:40.205819       7 log.go:172] (0xc0028deb00) (0xc0015fd4a0) Stream added, broadcasting: 3
I0429 14:24:40.206879       7 log.go:172] (0xc0028deb00) Reply frame received for 3
I0429 14:24:40.206936       7 log.go:172] (0xc0028deb00) (0xc001b25f40) Create stream
I0429 14:24:40.206963       7 log.go:172] (0xc0028deb00) (0xc001b25f40) Stream added, broadcasting: 5
I0429 14:24:40.207845       7 log.go:172] (0xc0028deb00) Reply frame received for 5
I0429 14:24:40.272153       7 log.go:172] (0xc0028deb00) Data frame received for 5
I0429 14:24:40.272214       7 log.go:172] (0xc001b25f40) (5) Data frame handling
I0429 14:24:40.272254       7 log.go:172] (0xc0028deb00) Data frame received for 3
I0429 14:24:40.272276       7 log.go:172] (0xc0015fd4a0) (3) Data frame handling
I0429 14:24:40.272307       7 log.go:172] (0xc0015fd4a0) (3) Data frame sent
I0429 14:24:40.272327       7 log.go:172] (0xc0028deb00) Data frame received for 3
I0429 14:24:40.272346       7 log.go:172] (0xc0015fd4a0) (3) Data frame handling
I0429 14:24:40.274160       7 log.go:172] (0xc0028deb00) Data frame received for 1
I0429 14:24:40.274184       7 log.go:172] (0xc00233b040) (1) Data frame handling
I0429 14:24:40.274200       7 log.go:172] (0xc00233b040) (1) Data frame sent
I0429 14:24:40.274220       7 log.go:172] (0xc0028deb00) (0xc00233b040) Stream removed, broadcasting: 1
I0429 14:24:40.274238       7 log.go:172] (0xc0028deb00) Go away received
I0429 14:24:40.274400       7 log.go:172] (0xc0028deb00) (0xc00233b040) Stream removed, broadcasting: 1
I0429 14:24:40.274427       7 log.go:172] (0xc0028deb00) (0xc0015fd4a0) Stream removed, broadcasting: 3
I0429 14:24:40.274461       7 log.go:172] (0xc0028deb00) (0xc001b25f40) Stream removed, broadcasting: 5
Apr 29 14:24:40.274: INFO: Exec stderr: ""
Apr 29 14:24:40.274: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:40.274: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:40.302322       7 log.go:172] (0xc002e70bb0) (0xc002c3afa0) Create stream
I0429 14:24:40.302343       7 log.go:172] (0xc002e70bb0) (0xc002c3afa0) Stream added, broadcasting: 1
I0429 14:24:40.304535       7 log.go:172] (0xc002e70bb0) Reply frame received for 1
I0429 14:24:40.304690       7 log.go:172] (0xc002e70bb0) (0xc00233b0e0) Create stream
I0429 14:24:40.304724       7 log.go:172] (0xc002e70bb0) (0xc00233b0e0) Stream added, broadcasting: 3
I0429 14:24:40.305832       7 log.go:172] (0xc002e70bb0) Reply frame received for 3
I0429 14:24:40.305866       7 log.go:172] (0xc002e70bb0) (0xc0013fa140) Create stream
I0429 14:24:40.305887       7 log.go:172] (0xc002e70bb0) (0xc0013fa140) Stream added, broadcasting: 5
I0429 14:24:40.306826       7 log.go:172] (0xc002e70bb0) Reply frame received for 5
I0429 14:24:40.357729       7 log.go:172] (0xc002e70bb0) Data frame received for 5
I0429 14:24:40.357752       7 log.go:172] (0xc0013fa140) (5) Data frame handling
I0429 14:24:40.357774       7 log.go:172] (0xc002e70bb0) Data frame received for 3
I0429 14:24:40.357791       7 log.go:172] (0xc00233b0e0) (3) Data frame handling
I0429 14:24:40.357804       7 log.go:172] (0xc00233b0e0) (3) Data frame sent
I0429 14:24:40.357817       7 log.go:172] (0xc002e70bb0) Data frame received for 3
I0429 14:24:40.357825       7 log.go:172] (0xc00233b0e0) (3) Data frame handling
I0429 14:24:40.359050       7 log.go:172] (0xc002e70bb0) Data frame received for 1
I0429 14:24:40.359061       7 log.go:172] (0xc002c3afa0) (1) Data frame handling
I0429 14:24:40.359067       7 log.go:172] (0xc002c3afa0) (1) Data frame sent
I0429 14:24:40.359074       7 log.go:172] (0xc002e70bb0) (0xc002c3afa0) Stream removed, broadcasting: 1
I0429 14:24:40.359098       7 log.go:172] (0xc002e70bb0) Go away received
I0429 14:24:40.359145       7 log.go:172] (0xc002e70bb0) (0xc002c3afa0) Stream removed, broadcasting: 1
I0429 14:24:40.359157       7 log.go:172] (0xc002e70bb0) (0xc00233b0e0) Stream removed, broadcasting: 3
I0429 14:24:40.359163       7 log.go:172] (0xc002e70bb0) (0xc0013fa140) Stream removed, broadcasting: 5
Apr 29 14:24:40.359: INFO: Exec stderr: ""
Apr 29 14:24:40.359: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:40.359: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:40.386392       7 log.go:172] (0xc002e711e0) (0xc002c3b2c0) Create stream
I0429 14:24:40.386418       7 log.go:172] (0xc002e711e0) (0xc002c3b2c0) Stream added, broadcasting: 1
I0429 14:24:40.388696       7 log.go:172] (0xc002e711e0) Reply frame received for 1
I0429 14:24:40.388726       7 log.go:172] (0xc002e711e0) (0xc002c3b360) Create stream
I0429 14:24:40.388737       7 log.go:172] (0xc002e711e0) (0xc002c3b360) Stream added, broadcasting: 3
I0429 14:24:40.389964       7 log.go:172] (0xc002e711e0) Reply frame received for 3
I0429 14:24:40.390005       7 log.go:172] (0xc002e711e0) (0xc002c3b400) Create stream
I0429 14:24:40.390018       7 log.go:172] (0xc002e711e0) (0xc002c3b400) Stream added, broadcasting: 5
I0429 14:24:40.390858       7 log.go:172] (0xc002e711e0) Reply frame received for 5
I0429 14:24:40.461935       7 log.go:172] (0xc002e711e0) Data frame received for 3
I0429 14:24:40.461995       7 log.go:172] (0xc002c3b360) (3) Data frame handling
I0429 14:24:40.462016       7 log.go:172] (0xc002c3b360) (3) Data frame sent
I0429 14:24:40.462031       7 log.go:172] (0xc002e711e0) Data frame received for 3
I0429 14:24:40.462045       7 log.go:172] (0xc002c3b360) (3) Data frame handling
I0429 14:24:40.462075       7 log.go:172] (0xc002e711e0) Data frame received for 5
I0429 14:24:40.462102       7 log.go:172] (0xc002c3b400) (5) Data frame handling
I0429 14:24:40.463722       7 log.go:172] (0xc002e711e0) Data frame received for 1
I0429 14:24:40.463760       7 log.go:172] (0xc002c3b2c0) (1) Data frame handling
I0429 14:24:40.463780       7 log.go:172] (0xc002c3b2c0) (1) Data frame sent
I0429 14:24:40.463802       7 log.go:172] (0xc002e711e0) (0xc002c3b2c0) Stream removed, broadcasting: 1
I0429 14:24:40.463826       7 log.go:172] (0xc002e711e0) Go away received
I0429 14:24:40.463967       7 log.go:172] (0xc002e711e0) (0xc002c3b2c0) Stream removed, broadcasting: 1
I0429 14:24:40.463988       7 log.go:172] (0xc002e711e0) (0xc002c3b360) Stream removed, broadcasting: 3
I0429 14:24:40.464005       7 log.go:172] (0xc002e711e0) (0xc002c3b400) Stream removed, broadcasting: 5
Apr 29 14:24:40.464: INFO: Exec stderr: ""
Apr 29 14:24:40.464: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-108 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 14:24:40.464: INFO: >>> kubeConfig: /root/.kube/config
I0429 14:24:40.500042       7 log.go:172] (0xc002e71810) (0xc002c3b680) Create stream
I0429 14:24:40.500077       7 log.go:172] (0xc002e71810) (0xc002c3b680) Stream added, broadcasting: 1
I0429 14:24:40.502586       7 log.go:172] (0xc002e71810) Reply frame received for 1
I0429 14:24:40.502628       7 log.go:172] (0xc002e71810) (0xc002c3b860) Create stream
I0429 14:24:40.502642       7 log.go:172] (0xc002e71810) (0xc002c3b860) Stream added, broadcasting: 3
I0429 14:24:40.503667       7 log.go:172] (0xc002e71810) Reply frame received for 3
I0429 14:24:40.503707       7 log.go:172] (0xc002e71810) (0xc00233b180) Create stream
I0429 14:24:40.503720       7 log.go:172] (0xc002e71810) (0xc00233b180) Stream added, broadcasting: 5
I0429 14:24:40.504794       7 log.go:172] (0xc002e71810) Reply frame received for 5
I0429 14:24:40.561289       7 log.go:172] (0xc002e71810) Data frame received for 3
I0429 14:24:40.561335       7 log.go:172] (0xc002c3b860) (3) Data frame handling
I0429 14:24:40.561361       7 log.go:172] (0xc002c3b860) (3) Data frame sent
I0429 14:24:40.561380       7 log.go:172] (0xc002e71810) Data frame received for 3
I0429 14:24:40.561391       7 log.go:172] (0xc002c3b860) (3) Data frame handling
I0429 14:24:40.561448       7 log.go:172] (0xc002e71810) Data frame received for 5
I0429 14:24:40.561466       7 log.go:172] (0xc00233b180) (5) Data frame handling
I0429 14:24:40.563051       7 log.go:172] (0xc002e71810) Data frame received for 1
I0429 14:24:40.563065       7 log.go:172] (0xc002c3b680) (1) Data frame handling
I0429 14:24:40.563071       7 log.go:172] (0xc002c3b680) (1) Data frame sent
I0429 14:24:40.563080       7 log.go:172] (0xc002e71810) (0xc002c3b680) Stream removed, broadcasting: 1
I0429 14:24:40.563093       7 log.go:172] (0xc002e71810) Go away received
I0429 14:24:40.563222       7 log.go:172] (0xc002e71810) (0xc002c3b680) Stream removed, broadcasting: 1
I0429 14:24:40.563248       7 log.go:172] (0xc002e71810) (0xc002c3b860) Stream removed, broadcasting: 3
I0429 14:24:40.563262       7 log.go:172] (0xc002e71810) (0xc00233b180) Stream removed, broadcasting: 5
Apr 29 14:24:40.563: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:24:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-108" for this suite.

• [SLOW TEST:11.451 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":266,"skipped":4422,"failed":0}
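The test above execs `cat /etc/hosts` in each container and checks that the kubelet injects its managed hosts file only when the pod is not on the host network and the container does not mount its own `/etc/hosts`. A minimal sketch of that decision in Python (the function names are illustrative, not part of the e2e framework; the marker string is the comment the kubelet writes at the top of a managed hosts file):

```python
# Sketch of the check the KubeletManagedEtcHosts test performs: the kubelet
# manages /etc/hosts only for pods that are NOT on the host network and
# containers that do NOT mount over /etc/hosts themselves.
KUBELET_MARKER = "# Kubernetes-managed hosts file"

def expect_kubelet_managed(host_network: bool, mounts_etc_hosts: bool) -> bool:
    """Should /etc/hosts inside this container carry the kubelet marker?"""
    return not host_network and not mounts_etc_hosts

def check_hosts_content(content: str, host_network: bool, mounts_etc_hosts: bool) -> bool:
    """Mirror the test's assertion on the output of `cat /etc/hosts`."""
    managed = content.startswith(KUBELET_MARKER)
    return managed == expect_kubelet_managed(host_network, mounts_etc_hosts)
```

This is why the log execs against both `/etc/hosts` and `/etc/hosts-original`, and against a `hostNetwork=true` pod: each combination exercises a different branch of the condition.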
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:24:40.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:24:51.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4794" for this suite.

• [SLOW TEST:11.168 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":290,"completed":267,"skipped":4430,"failed":0}
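The ResourceQuota steps above assert that creating a ReplicaSet raises the quota's used count and deleting it releases the usage. A toy in-memory model of that accounting (this is a sketch of the observable behavior, not the Kubernetes quota controller):

```python
# Toy model of ResourceQuota accounting for count/replicasets:
# creation charges usage against the hard limit, deletion releases it.
class QuotaTracker:
    def __init__(self, hard: int):
        self.hard = hard   # hard limit, e.g. count/replicasets: 1
        self.used = 0      # current usage reported in quota status

    def create_replicaset(self):
        if self.used + 1 > self.hard:
            raise RuntimeError("exceeded quota")
        self.used += 1

    def delete_replicaset(self):
        self.used -= 1
```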
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:24:51.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-0a249e84-42f8-424b-a64e-0f0482491a7c
STEP: Creating a pod to test consume configMaps
Apr 29 14:24:51.843: INFO: Waiting up to 5m0s for pod "pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8" in namespace "configmap-6254" to be "Succeeded or Failed"
Apr 29 14:24:51.857: INFO: Pod "pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.502779ms
Apr 29 14:24:53.870: INFO: Pod "pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027503274s
Apr 29 14:24:55.876: INFO: Pod "pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033375997s
STEP: Saw pod success
Apr 29 14:24:55.876: INFO: Pod "pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8" satisfied condition "Succeeded or Failed"
Apr 29 14:24:55.879: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8 container configmap-volume-test: 
STEP: delete the pod
Apr 29 14:24:55.942: INFO: Waiting for pod pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8 to disappear
Apr 29 14:24:55.971: INFO: Pod pod-configmaps-6eb433d5-2f9f-4d4c-8e5f-e9c92d0d9ed8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:24:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6254" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":268,"skipped":4433,"failed":0}
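This configMap volume test exercises `items` mappings with a per-item `mode` that overrides the volume's `defaultMode`. A hedged sketch of how the kubelet projects keys to files under those rules (function name and shapes are illustrative):

```python
def project_items(data, items, default_mode=0o644):
    """Map configMap keys to {path: (content, mode)} file entries.

    Each `items` entry picks a key, a target path, and optionally a
    per-item `mode` that overrides the volume-wide default_mode --
    the combination this e2e test checks.
    """
    out = {}
    for item in items:
        mode = item.get("mode", default_mode)
        out[item["path"]] = (data[item["key"]], mode)
    return out
```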
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:24:55.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:24:56.734: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:24:58.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767096, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767096, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767096, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767096, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:25:01.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:13.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5443" for this suite.
STEP: Destroying namespace "webhook-5443-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.306 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":290,"completed":269,"skipped":4452,"failed":0}
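The denial steps above rely on a webhook returning `allowed: false` in its AdmissionReview response. A minimal sketch of such a response body in Python (the review shape follows the `admission.k8s.io/v1` schema; the helper name is hypothetical):

```python
def deny_review(review, reason):
    """Build a minimal AdmissionReview response denying the request.

    The response must echo the request's uid; `allowed: False` plus a
    status message is what makes the apiserver reject the pod/configmap.
    """
    return {
        "apiVersion": review["apiVersion"],
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": False,
            "status": {"message": reason},
        },
    }
```

Note the same response path covers create, PUT, and PATCH: the apiserver calls the webhook for each operation the webhook configuration registers, which is why the test exercises updates as well as creates.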
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:13.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:25:13.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754" in namespace "downward-api-3780" to be "Succeeded or Failed"
Apr 29 14:25:13.403: INFO: Pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754": Phase="Pending", Reason="", readiness=false. Elapsed: 47.348359ms
Apr 29 14:25:15.407: INFO: Pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051648289s
Apr 29 14:25:17.411: INFO: Pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055411655s
Apr 29 14:25:19.415: INFO: Pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060207832s
STEP: Saw pod success
Apr 29 14:25:19.416: INFO: Pod "downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754" satisfied condition "Succeeded or Failed"
Apr 29 14:25:19.418: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754 container client-container: 
STEP: delete the pod
Apr 29 14:25:19.470: INFO: Waiting for pod downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754 to disappear
Apr 29 14:25:19.482: INFO: Pod downwardapi-volume-e6fef13b-9dc7-45ad-83e6-f48228cf6754 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:19.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3780" for this suite.

• [SLOW TEST:6.255 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":290,"completed":270,"skipped":4482,"failed":0}
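The downward API volume exposes `requests.cpu` through a `resourceFieldRef`, dividing the request by the field's `divisor` and rounding up. A sketch of that arithmetic, working in millicores (the function name is illustrative):

```python
import math

def downward_cpu(request_millis: int, divisor_millis: int) -> int:
    """Value a resourceFieldRef surfaces for requests.cpu.

    Kubernetes divides the request by the divisor and rounds up, so a
    250m request read with divisor "1" (= 1000m) surfaces as 1 full CPU,
    while divisor "1m" surfaces the raw 250.
    """
    return math.ceil(request_millis / divisor_millis)
```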
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:19.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 29 14:25:19.784: INFO: Waiting up to 5m0s for pod "pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0" in namespace "emptydir-2298" to be "Succeeded or Failed"
Apr 29 14:25:19.820: INFO: Pod "pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.452626ms
Apr 29 14:25:21.825: INFO: Pod "pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041373862s
Apr 29 14:25:23.882: INFO: Pod "pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098705178s
STEP: Saw pod success
Apr 29 14:25:23.882: INFO: Pod "pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0" satisfied condition "Succeeded or Failed"
Apr 29 14:25:23.886: INFO: Trying to get logs from node kali-worker2 pod pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0 container test-container: 
STEP: delete the pod
Apr 29 14:25:24.070: INFO: Waiting for pod pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0 to disappear
Apr 29 14:25:24.082: INFO: Pod pod-673353d2-ec02-4fa9-b9ed-6e2c94fac5b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:24.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2298" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":271,"skipped":4532,"failed":0}
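The emptyDir test writes a file with mode 0644 on a tmpfs-backed volume as a non-root user and asserts on the `ls -l`-style permission string in the container output. The mode-to-string rendering it checks can be reproduced with the standard library:

```python
import stat

def perm_string(mode: int) -> str:
    """Render a regular file's mode the way the test's `ls -l` check sees it,
    e.g. 0o644 -> '-rw-r--r--'."""
    return stat.filemode(stat.S_IFREG | mode)
```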
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:24.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:25:24.942: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:25:26.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767124, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767124, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767125, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767124, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:25:30.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:30.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5933" for this suite.
STEP: Destroying namespace "webhook-5933-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.102 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":290,"completed":272,"skipped":4568,"failed":0}
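The mutating variant returns an `allowed: true` response carrying a base64-encoded JSONPatch, which the apiserver applies before persisting the configmap. A sketch of such a response (the patched key `mutation-stage-1` mirrors what this e2e webhook adds; treat the exact key as an assumption about the test image):

```python
import base64
import json

def mutate_configmap(review):
    """Build an AdmissionReview response that patches a key into the configmap.

    `patch` is a JSONPatch document, base64-encoded per the admission API;
    `patchType` must be "JSONPatch".
    """
    patch = [{"op": "add", "path": "/data/mutation-stage-1", "value": "yes"}]
    return {
        "apiVersion": review["apiVersion"],
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```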
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:30.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1617f676-0723-46ac-a165-7694b711cb50
STEP: Creating a pod to test consume secrets
Apr 29 14:25:30.367: INFO: Waiting up to 5m0s for pod "pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1" in namespace "secrets-9336" to be "Succeeded or Failed"
Apr 29 14:25:30.547: INFO: Pod "pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1": Phase="Pending", Reason="", readiness=false. Elapsed: 179.66906ms
Apr 29 14:25:32.576: INFO: Pod "pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209542766s
Apr 29 14:25:34.581: INFO: Pod "pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214387564s
STEP: Saw pod success
Apr 29 14:25:34.581: INFO: Pod "pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1" satisfied condition "Succeeded or Failed"
Apr 29 14:25:34.585: INFO: Trying to get logs from node kali-worker pod pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1 container secret-volume-test: 
STEP: delete the pod
Apr 29 14:25:34.623: INFO: Waiting for pod pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1 to disappear
Apr 29 14:25:34.672: INFO: Pod pod-secrets-29f23160-0e77-42b8-94b7-245cb9fa51d1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:34.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9336" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":273,"skipped":4571,"failed":0}
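Unlike configMaps, a Secret's `data` field holds base64-encoded values, which the kubelet decodes into files with the volume's `defaultMode` applied. A sketch of that materialization step (shapes are illustrative, not the kubelet's actual code):

```python
import base64

def secret_files(secret_data, default_mode=0o644):
    """Decode a Secret's base64 `data` map into {key: (bytes, mode)} entries,
    the shape the kubelet writes into the secret volume."""
    return {k: (base64.b64decode(v), default_mode)
            for k, v in secret_data.items()}
```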
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:34.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:34.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1987" for this suite.
STEP: Destroying namespace "nspatchtest-3215a0f0-2451-40d4-bd42-c1435043d5c9-7827" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":290,"completed":274,"skipped":4607,"failed":0}
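Patching a Namespace to add a label, as the steps above do, follows merge-patch semantics: nested maps merge key by key and a null value deletes a key. A compact RFC 7386 JSON merge patch sketch:

```python
def merge_patch(obj, patch):
    """Apply an RFC 7386 JSON merge patch in place: nested dicts merge,
    a None value deletes the key, anything else replaces it."""
    for k, v in patch.items():
        if v is None:
            obj.pop(k, None)
        elif isinstance(v, dict) and isinstance(obj.get(k), dict):
            merge_patch(obj[k], v)
        else:
            obj[k] = v
    return obj
```

Merging rather than replacing is what lets the patch add one label without clobbering the namespace's existing metadata.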
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:34.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 29 14:25:34.951: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
Apr 29 14:25:35.403: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 29 14:25:37.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767135, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767135, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767135, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767135, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:25:40.641: INFO: Waited 629.971287ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:41.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7556" for this suite.

• [SLOW TEST:6.356 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":290,"completed":275,"skipped":4632,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:41.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-ff3be748-6bb9-47ef-a01c-be0ff856a50c
STEP: Creating configMap with name cm-test-opt-upd-b9ccf45c-1767-4787-8856-3cca9c2c6821
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ff3be748-6bb9-47ef-a01c-be0ff856a50c
STEP: Updating configmap cm-test-opt-upd-b9ccf45c-1767-4787-8856-3cca9c2c6821
STEP: Creating configMap with name cm-test-opt-create-70f1cb88-775d-4125-a8de-875abc2a1c5d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8343" for this suite.

• [SLOW TEST:8.541 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":276,"skipped":4636,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:49.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 29 14:25:49.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85" in namespace "downward-api-4787" to be "Succeeded or Failed"
Apr 29 14:25:49.846: INFO: Pod "downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85": Phase="Pending", Reason="", readiness=false. Elapsed: 16.298627ms
Apr 29 14:25:51.850: INFO: Pod "downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020719479s
Apr 29 14:25:53.855: INFO: Pod "downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025248322s
STEP: Saw pod success
Apr 29 14:25:53.855: INFO: Pod "downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85" satisfied condition "Succeeded or Failed"
Apr 29 14:25:53.858: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85 container client-container: 
STEP: delete the pod
Apr 29 14:25:54.070: INFO: Waiting for pod downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85 to disappear
Apr 29 14:25:54.084: INFO: Pod downwardapi-volume-9e8fc9b5-c416-4f2c-9406-ec450bc7ef85 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:25:54.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4787" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":277,"skipped":4648,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:25:54.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:25:54.578: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:25:56.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767154, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767154, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767154, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767154, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:25:59.709: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:25:59.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:26:01.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6324" for this suite.
STEP: Destroying namespace "webhook-6324-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.009 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":290,"completed":278,"skipped":4674,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:26:01.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:26:01.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 29 14:26:03.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767161, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767161, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767162, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767161, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:26:07.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:26:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6437" for this suite.
STEP: Destroying namespace "webhook-6437-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.187 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":290,"completed":279,"skipped":4678,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:26:07.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-1f02586f-20e3-4d6d-8fef-1ed6d7b1188a in namespace container-probe-6553
Apr 29 14:26:11.462: INFO: Started pod busybox-1f02586f-20e3-4d6d-8fef-1ed6d7b1188a in namespace container-probe-6553
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 14:26:11.464: INFO: Initial restart count of pod busybox-1f02586f-20e3-4d6d-8fef-1ed6d7b1188a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:30:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6553" for this suite.

• [SLOW TEST:244.325 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":290,"completed":280,"skipped":4695,"failed":0}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:30:11.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 29 14:30:12.310: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 29 14:30:14.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767412, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767412, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767412, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767412, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 29 14:30:17.460: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:30:17.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:30:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6325" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:7.170 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":290,"completed":281,"skipped":4695,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:30:18.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:32:18.922: INFO: Deleting pod "var-expansion-7815c328-9b14-4dbe-864a-bffd1915943e" in namespace "var-expansion-8581"
Apr 29 14:32:18.928: INFO: Wait up to 5m0s for pod "var-expansion-7815c328-9b14-4dbe-864a-bffd1915943e" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:32:20.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8581" for this suite.

• [SLOW TEST:122.152 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":290,"completed":282,"skipped":4704,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:32:20.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-3841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3841 to expose endpoints map[]
Apr 29 14:32:21.161: INFO: Get endpoints failed (40.352095ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 29 14:32:22.165: INFO: successfully validated that service endpoint-test2 in namespace services-3841 exposes endpoints map[] (1.044611301s elapsed)
STEP: Creating pod pod1 in namespace services-3841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3841 to expose endpoints map[pod1:[80]]
Apr 29 14:32:26.251: INFO: successfully validated that service endpoint-test2 in namespace services-3841 exposes endpoints map[pod1:[80]] (4.077270927s elapsed)
STEP: Creating pod pod2 in namespace services-3841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3841 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 29 14:32:30.446: INFO: successfully validated that service endpoint-test2 in namespace services-3841 exposes endpoints map[pod1:[80] pod2:[80]] (4.191444931s elapsed)
STEP: Deleting pod pod1 in namespace services-3841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3841 to expose endpoints map[pod2:[80]]
Apr 29 14:32:31.590: INFO: successfully validated that service endpoint-test2 in namespace services-3841 exposes endpoints map[pod2:[80]] (1.139585092s elapsed)
STEP: Deleting pod pod2 in namespace services-3841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3841 to expose endpoints map[]
Apr 29 14:32:32.821: INFO: successfully validated that service endpoint-test2 in namespace services-3841 exposes endpoints map[] (1.007517842s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:32:32.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3841" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:11.914 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":290,"completed":283,"skipped":4709,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:32:32.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:32:37.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4908" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":290,"completed":284,"skipped":4714,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:32:37.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:32:41.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5125" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":290,"completed":285,"skipped":4748,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:32:41.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:32:41.558: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 29 14:32:46.562: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 14:32:46.562: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 29 14:32:48.566: INFO: Creating deployment "test-rollover-deployment"
Apr 29 14:32:48.599: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 29 14:32:50.634: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 29 14:32:50.640: INFO: Ensure that both replica sets have 1 created replica
Apr 29 14:32:50.646: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 29 14:32:50.654: INFO: Updating deployment test-rollover-deployment
Apr 29 14:32:50.654: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Apr 29 14:32:52.665: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Apr 29 14:32:52.671: INFO: Make sure deployment "test-rollover-deployment" is complete
Apr 29 14:32:52.678: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:32:52.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767570, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:32:54.687: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:32:54.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767574, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:32:56.687: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:32:56.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767574, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:32:58.687: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:32:58.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767574, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:33:00.686: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:33:00.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767574, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:33:02.684: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 14:33:02.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767574, loc:(*time.Location)(0x7c45300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723767568, loc:(*time.Location)(0x7c45300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 14:33:04.891: INFO: 
Apr 29 14:33:04.891: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Apr 29 14:33:04.896: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2400 /apis/apps/v1/namespaces/deployment-2400/deployments/test-rollover-deployment 19dcc4f7-3baa-4aee-829c-f48d522753ea 86073 2 2020-04-29 14:32:48 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-04-29 14:32:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] 
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003896858  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-29 14:32:48 +0000 UTC,LastTransitionTime:2020-04-29 14:32:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-04-29 14:33:04 +0000 UTC,LastTransitionTime:2020-04-29 14:32:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Apr 29 14:33:04.898: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879  deployment-2400 /apis/apps/v1/namespaces/deployment-2400/replicasets/test-rollover-deployment-7c4fd9c879 0af5ad0c-a514-40e5-928b-cf9f115495fc 86060 2 2020-04-29 14:32:50 +0000 UTC   map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 19dcc4f7-3baa-4aee-829c-f48d522753ea 0xc003897057 0xc003897058}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19dcc4f7-3baa-4aee-829c-f48d522753ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod 
pod-template-hash:7c4fd9c879] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038970f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:33:04.898: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Apr 29 14:33:04.898: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2400 /apis/apps/v1/namespaces/deployment-2400/replicasets/test-rollover-controller 0b312faa-85bf-4b7d-b60d-9c5fc70d5925 86072 2 2020-04-29 14:32:41 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 19dcc4f7-3baa-4aee-829c-f48d522753ea 0xc003896e27 0xc003896e28}] []  [{e2e.test Update apps/v1 2020-04-29 14:32:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19dcc4f7-3baa-4aee-829c-f48d522753ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] 
Always 0xc003896ee8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:33:04.898: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-2400 /apis/apps/v1/namespaces/deployment-2400/replicasets/test-rollover-deployment-5686c4cfd5 2538a034-c23d-4ace-9398-2cfa856c4c1d 86008 2 2020-04-29 14:32:48 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 19dcc4f7-3baa-4aee-829c-f48d522753ea 0xc003896f57 0xc003896f58}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:32:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19dcc4f7-3baa-4aee-829c-f48d522753ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] 
[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003896fe8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:33:04.901: INFO: Pod "test-rollover-deployment-7c4fd9c879-bdp7w" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-bdp7w test-rollover-deployment-7c4fd9c879- deployment-2400 /api/v1/namespaces/deployment-2400/pods/test-rollover-deployment-7c4fd9c879-bdp7w 3fd8b6ac-8831-4e99-947c-a341b8dc4230 86029 0 2020-04-29 14:32:50 +0000 UTC   map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 0af5ad0c-a514-40e5-928b-cf9f115495fc 0xc002e4cfe7 0xc002e4cfe8}] []  [{kube-controller-manager Update v1 2020-04-29 14:32:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0af5ad0c-a514-40e5-928b-cf9f115495fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:32:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.204\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q57xs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q57xs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q57xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:32:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:32:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:32:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:32:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.204,StartTime:2020-04-29 
14:32:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:32:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://3b9c0f7b5cdf1cf6c64a255137efe6014a248f6456b9db382c286aafdc9fa227,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:33:04.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2400" for this suite.

• [SLOW TEST:23.442 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":290,"completed":286,"skipped":4750,"failed":0}
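The rollover behavior exercised above can be reproduced with a Deployment manifest like the following sketch. The labels, image, `minReadySeconds`, and rolling-update parameters mirror the values dumped in the log (`maxSurge: 1`, `maxUnavailable: 0`); updating `.spec.template` on such a Deployment triggers the same revision-2 rollover the test waits for:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # pods must stay ready this long before counting as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one surge pod during the rollover
      maxUnavailable: 0        # never drop below the desired replica count
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
```

With `maxUnavailable: 0` the controller keeps the old pod serving until the new one has been ready for `minReadySeconds`, which is why the log shows `UnavailableReplicas:1` (the surge pod) while both ReplicaSets briefly hold one replica each.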
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:33:04.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80
Apr 29 14:33:05.242: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 29 14:34:05.267: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:34:05.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Apr 29 14:34:09.408: INFO: found a healthy node: kali-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:34:23.895: INFO: pods created so far: [1 1 1]
Apr 29 14:34:23.895: INFO: length of pods created so far: 3
Apr 29 14:34:37.908: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:34:44.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1806" for this suite.
[AfterEach] PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:34:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4206" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74

• [SLOW TEST:101.930 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428
    runs ReplicaSets to verify preemption running path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":290,"completed":287,"skipped":4763,"failed":0}
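Preemption of the kind verified above is driven by pod priority. A minimal sketch of the two objects involved (the class and pod names here are hypothetical; the e2e test creates its own priority classes and ReplicaSets, which the log does not show):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # hypothetical name
value: 1000                    # higher value wins when the scheduler must evict
globalDefault: false
description: "Pods with this class may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-pod          # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```

When the cluster lacks room for `preemptor-pod`, the scheduler evicts lower-priority pods from a node (here `kali-worker`) to make space, which is the "running path" the counts `[1 1 1]` → `[2 2 1]` in the log are tracking.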
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:34:46.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 29 14:34:58.731: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:34:58.755: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:00.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:00.760: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:02.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:02.760: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:04.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:04.759: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:06.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:06.759: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:08.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:08.760: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:10.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:10.760: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:12.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:12.760: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 14:35:14.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 14:35:14.759: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:35:14.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-725" for this suite.

• [SLOW TEST:27.952 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":290,"completed":288,"skipped":4769,"failed":0}
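The pod under test (`pod-with-prestop-exec-hook`) has the shape sketched below. The preStop command shown is a placeholder: the log does not record the exact command the e2e test runs, only that the hook executes during the ~16 s deletion window before the pod disappears:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30   # the hook must finish within this window
  containers:
  - name: pod-with-prestop-exec-hook
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    lifecycle:
      preStop:
        exec:
          # placeholder command; the real test notifies a handler pod so it
          # can later "check prestop hook" as the STEP above does
          command: ["sh", "-c", "echo prestop-hook-ran"]
```

The kubelet runs the preStop exec hook before sending SIGTERM to the container, which is why deletion is not instantaneous and the test polls "Waiting for pod ... to disappear" several times.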
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:35:14.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-9efe0dca-e38f-473e-adc5-5da3c1dd00dc
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-9efe0dca-e38f-473e-adc5-5da3c1dd00dc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:36:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7922" for this suite.

• [SLOW TEST:92.833 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":290,"completed":289,"skipped":4786,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 29 14:36:47.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 29 14:36:47.733: INFO: Creating deployment "webserver-deployment"
Apr 29 14:36:47.752: INFO: Waiting for observed generation 1
Apr 29 14:36:49.979: INFO: Waiting for all required pods to come up
Apr 29 14:36:49.984: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 29 14:37:00.181: INFO: Waiting for deployment "webserver-deployment" to complete
Apr 29 14:37:00.187: INFO: Updating deployment "webserver-deployment" with a non-existent image
Apr 29 14:37:00.193: INFO: Updating deployment webserver-deployment
Apr 29 14:37:00.193: INFO: Waiting for observed generation 2
Apr 29 14:37:02.200: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 29 14:37:02.203: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 29 14:37:02.206: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 29 14:37:02.214: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 29 14:37:02.214: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 29 14:37:02.216: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Apr 29 14:37:02.221: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Apr 29 14:37:02.221: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Apr 29 14:37:02.228: INFO: Updating deployment webserver-deployment
Apr 29 14:37:02.228: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Apr 29 14:37:02.792: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 29 14:37:02.797: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Apr 29 14:37:06.206: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-9834 /apis/apps/v1/namespaces/deployment-9834/deployments/webserver-deployment f39a4cf9-75ec-4e6b-ba2d-1815c788fe0c 87290 3 2020-04-29 14:36:47 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003367308  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-29 14:37:02 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-04-29 14:37:03 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Apr 29 14:37:06.387: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-9834 /apis/apps/v1/namespaces/deployment-9834/replicasets/webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 87284 3 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f39a4cf9-75ec-4e6b-ba2d-1815c788fe0c 0xc0033677e7 0xc0033677e8}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f39a4cf9-75ec-4e6b-ba2d-1815c788fe0c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003367868  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:37:06.387: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Apr 29 14:37:06.387: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-9834 /apis/apps/v1/namespaces/deployment-9834/replicasets/webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 87281 3 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f39a4cf9-75ec-4e6b-ba2d-1815c788fe0c 0xc0033678c7 0xc0033678c8}] []  [{kube-controller-manager Update apps/v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f39a4cf9-75ec-4e6b-ba2d-1815c788fe0c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} 
{[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003367938  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Apr 29 14:37:06.588: INFO: Pod "webserver-deployment-6676bcd6d4-2mkj8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2mkj8 webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-2mkj8 6d60bdf3-03ee-4e8c-b002-0358bf6752f9 87278 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc002e4da87 0xc002e4da88}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.588: INFO: Pod "webserver-deployment-6676bcd6d4-4b9rt" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4b9rt webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-4b9rt 3ea3e9f3-8771-438f-bfe8-20a138bc9e2d 87312 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc002e4dc30 0xc002e4dc31}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.589: INFO: Pod "webserver-deployment-6676bcd6d4-4whnr" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4whnr webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-4whnr 0898f637-0770-4d97-8389-3b09ec7f6b84 87269 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc002e4ddd0 0xc002e4ddd1}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:d
efault-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.589: INFO: Pod "webserver-deployment-6676bcd6d4-6lrj9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6lrj9 webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-6lrj9 2e8c1a8d-8717-4c8c-95e5-b1b7906689d0 87309 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc002e4df10 0xc002e4df11}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.589: INFO: Pod "webserver-deployment-6676bcd6d4-7qt4l" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7qt4l webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-7qt4l 4ae66bbb-b49b-4f53-9644-a8d464ab015d 87264 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc0038960b0 0xc0038960b1}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.589: INFO: Pod "webserver-deployment-6676bcd6d4-9gdmb" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9gdmb webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-9gdmb 21f9433a-9074-43d2-a876-8f2171383a44 87303 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896250 0xc003896251}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.590: INFO: Pod "webserver-deployment-6676bcd6d4-gdfkq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gdfkq webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-gdfkq c1b380f1-6cfe-43f1-9e94-74f093d4b412 87199 0 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc0038963f0 0xc0038963f1}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.590: INFO: Pod "webserver-deployment-6676bcd6d4-hjmtz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hjmtz webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-hjmtz 431e81b6-f50d-4edd-9984-99038cb5a7d1 87307 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896590 0xc003896591}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.590: INFO: Pod "webserver-deployment-6676bcd6d4-lvs9l" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lvs9l webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-lvs9l b08a3cf9-cc9d-46ca-81bd-bbca7e2b805e 87197 0 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896730 0xc003896731}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.590: INFO: Pod "webserver-deployment-6676bcd6d4-qvk7q" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qvk7q webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-qvk7q b8437994-4ed5-40cf-be15-8f558bcd3017 87178 0 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc0038968d0 0xc0038968d1}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.590: INFO: Pod "webserver-deployment-6676bcd6d4-td98q" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-td98q webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-td98q 49ec5ded-3e09-42de-90a0-601d5ac1eb75 87325 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896af0 0xc003896af1}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-6676bcd6d4-vtl5z" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vtl5z webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-vtl5z 30411bd0-0366-43a9-affb-0b1f0351000f 87173 0 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896d80 0xc003896d81}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-6676bcd6d4-xl9df" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xl9df webserver-deployment-6676bcd6d4- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-6676bcd6d4-xl9df 0b64aad2-04b2-4c99-a91e-65cfac11d452 87181 0 2020-04-29 14:37:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9f07e3a9-0108-49d8-b198-14966abcdf36 0xc003896f80 0xc003896f81}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f07e3a9-0108-49d8-b198-14966abcdf36\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:00 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-84855cf797-24cl2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-24cl2 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-24cl2 c30e7e59-dd80-4922-84fe-8eb64b031ab0 87136 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897140 0xc003897141}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.223\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.223,StartTime:2020-04-29 14:36:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ab3d6c05aa6ae1b465ae6750c8535899e883fb23596c8acf67a756847b64dfc9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-84855cf797-29666" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-29666 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-29666 d106fa96-b237-40d3-8909-64d9bfdddef2 87282 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897377 0xc003897378}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-84855cf797-2xnpg" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2xnpg webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-2xnpg f2c18034-3c1c-4bf1-b303-5517ec32d8ac 87118 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897577 0xc003897578}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.214,StartTime:2020-04-29 14:36:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa5c9d9e156110f0eba314c6ba57d530d8fa8c5a7b59035b8bd560d1ba5cf0de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.591: INFO: Pod "webserver-deployment-84855cf797-4tpd8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-4tpd8 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-4tpd8 32c8ff8f-a984-4657-a4b7-de6e688c9de8 87262 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897797 0xc003897798}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.592: INFO: Pod "webserver-deployment-84855cf797-66lqd" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-66lqd webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-66lqd eb203060-14b3-465c-a6a3-6f253d672c63 87332 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897a17 0xc003897a18}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.592: INFO: Pod "webserver-deployment-84855cf797-6fcq5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6fcq5 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-6fcq5 4fc833d9-456c-4e78-ae6e-0874b7ce50b1 87337 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897bf7 0xc003897bf8}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.592: INFO: Pod "webserver-deployment-84855cf797-bfjth" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bfjth webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-bfjth 7c582ff4-8a42-4d13-bda0-66e48d87914d 87316 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc003897e07 0xc003897e08}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.592: INFO: Pod "webserver-deployment-84855cf797-btwb9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-btwb9 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-btwb9 fb64932d-6af6-4e15-acd7-de96cad65ca8 87112 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc0007b8467 0xc0007b8468}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.211\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.211,StartTime:2020-04-29 14:36:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1360c685bcc0308cf19cb77f2448a089801c679e0121cd66e3d573edd11fe10e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.592: INFO: Pod "webserver-deployment-84855cf797-h54kl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-h54kl webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-h54kl efc9b657-d9da-4899-ae16-00aad344470f 87295 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc0007b8d37 0xc0007b8d38}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.593: INFO: Pod "webserver-deployment-84855cf797-hl9sv" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hl9sv webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-hl9sv 7a5223e9-698a-45ef-8902-506af7fd1dd7 87082 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc0007b9587 0xc0007b9588}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.220\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.220,StartTime:2020-04-29 14:36:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3d20b2509ffca81dcb01632b874a67101b79a117eb065f6d1fb20ecd8d1f8ce1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.593: INFO: Pod "webserver-deployment-84855cf797-j8cxg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-j8cxg webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-j8cxg 2676c3d9-b411-47ac-937d-29141577d5fe 87294 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086a197 0xc00086a198}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.593: INFO: Pod "webserver-deployment-84855cf797-kvmgl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kvmgl webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-kvmgl 0719602a-9c71-43b7-86ae-ba6171220054 87287 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086a357 0xc00086a358}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.593: INFO: Pod "webserver-deployment-84855cf797-mkzs8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mkzs8 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-mkzs8 a6d4ee9d-5eb1-41f9-863e-e369087aeb26 87123 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086a667 0xc00086a668}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.221,StartTime:2020-04-29 14:36:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4d23675e1e45ef7e4f34b6f88cc0f75fd592ab2d3fb2192f57eb18e8e90a111f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.593: INFO: Pod "webserver-deployment-84855cf797-qq2p2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qq2p2 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-qq2p2 4542dc47-dbc6-4c67-999b-e639aad024f2 87107 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086a857 0xc00086a858}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.212\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.212,StartTime:2020-04-29 14:36:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2089dc567264c31597ebf7ed6bd95e72465efe60b17b9c368c29410fec6447cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-slcts" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-slcts webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-slcts 442f7a93-19c9-4521-bddd-8b553ce2a364 87318 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086aa07 0xc00086aa08}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-spsw6" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-spsw6 webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-spsw6 42f5f285-738f-4b59-94fe-468f2fb33b3f 87128 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086ab97 0xc00086ab98}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.222,StartTime:2020-04-29 14:36:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a00ee68eb7d188222bf37467caebd3b3a8654c6a03d3bc74562aee3c5673024d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-t8jqw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-t8jqw webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-t8jqw f9b7233f-7da0-408d-a10a-66900b42c078 87319 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086ad47 0xc00086ad48}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-04-29 14:37:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-vfsck" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vfsck webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-vfsck 0fd65769-5ca3-4644-b1a1-884cb3dc645b 87100 0 2020-04-29 14:36:47 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086aed7 0xc00086aed8}] []  [{kube-controller-manager Update v1 2020-04-29 14:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:36:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.213,StartTime:2020-04-29 14:36:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-29 14:36:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a49bf0b42abbf7c25efe3d28fd7dcaf797f457a0941544855091c6d086ccba18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-wx9gg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wx9gg webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-wx9gg ed7955fd-52a7-45db-b3af-5860c7ebcab0 87266 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086b087 0xc00086b088}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 29 14:37:06.594: INFO: Pod "webserver-deployment-84855cf797-xvfdh" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xvfdh webserver-deployment-84855cf797- deployment-9834 /api/v1/namespaces/deployment-9834/pods/webserver-deployment-84855cf797-xvfdh fdc46fdf-235e-41c5-8c45-aac88f1175a5 87277 0 2020-04-29 14:37:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 1e55abf4-b1c8-4a49-a1db-4373d5425ab5 0xc00086b690 0xc00086b691}] []  [{kube-controller-manager Update v1 2020-04-29 14:37:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e55abf4-b1c8-4a49-a1db-4373d5425ab5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-04-29 14:37:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p6rvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p6rvz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p6rvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-29 14:37:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-04-29 14:37:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 29 14:37:06.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9834" for this suite.

• [SLOW TEST:19.515 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":290,"completed":290,"skipped":4800,"failed":0}
SSSApr 29 14:37:07.139: INFO: Running AfterSuite actions on all nodes
Apr 29 14:37:07.139: INFO: Running AfterSuite actions on node 1
Apr 29 14:37:07.139: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":290,"completed":290,"skipped":4803,"failed":0}

Ran 290 of 5093 Specs in 5674.253 seconds
SUCCESS! -- 290 Passed | 0 Failed | 0 Pending | 4803 Skipped
PASS