I0914 11:48:24.993744 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0914 11:48:24.993932 7 e2e.go:129] Starting e2e run "115970fe-b37c-4cb1-ae93-798b9f1159f5" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1600084103 - Will randomize all specs
Will run 303 of 5232 specs

Sep 14 11:48:25.053: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 11:48:25.055: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 14 11:48:25.073: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 14 11:48:25.200: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 14 11:48:25.200: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 14 11:48:25.200: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 14 11:48:25.206: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 14 11:48:25.206: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 14 11:48:25.206: INFO: e2e test version: v1.19.2-rc.0
Sep 14 11:48:25.207: INFO: kube-apiserver version: v1.19.0
Sep 14 11:48:25.208: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 11:48:25.212: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:48:25.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Sep 14 11:48:25.434: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:48:25.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7400" for this suite.
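The list-by-selector and delete-by-collection steps above both rely on Kubernetes label selectors; for the equality-based selectors used here, matching reduces to a subset check over the object's metadata labels. A minimal sketch (the helper names and dict shapes are illustrative, not the e2e framework's API):

```python
def matches_selector(labels, selector):
    """Equality-based label selector: every key=value pair in the
    selector must be present verbatim in the object's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def filter_by_selector(objects, selector):
    """Keep only objects (ConfigMap-like dicts) whose metadata.labels match."""
    return [obj for obj in objects
            if matches_selector(obj.get("metadata", {}).get("labels", {}), selector)]
```

Delete-by-collection is then just deleting whatever this filter returns.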
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":1,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:48:25.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 14 11:48:26.881: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 14 11:48:28.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735680906, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735680906, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735680906, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735680906, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 11:48:31.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 11:48:31.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:48:33.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9761" for this suite.
STEP: Destroying namespace "webhook-9761-markers" for this suite.
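The "Wait for the deployment to be ready" step polls a DeploymentStatus like the one logged above until the rollout completes; the readiness condition amounts to comparing replica counters. A hedged sketch of that check (field names mirror the status struct in the log; this is not the e2e framework's own helper, and real rollout logic also compares observedGeneration against the spec generation):

```python
def deployment_ready(status, desired_replicas=1):
    """Rollout-complete check over DeploymentStatus counters.
    The status in the log above fails it: AvailableReplicas=0,
    UnavailableReplicas=1, hence 'MinimumReplicasUnavailable'."""
    return (status.get("updatedReplicas", 0) == desired_replicas
            and status.get("availableReplicas", 0) == desired_replicas
            and status.get("unavailableReplicas", 0) == 0)
```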
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.467 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":2,"skipped":56,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:48:33.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Sep 14 11:48:33.244: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-resource-version bff054c6-183f-4182-a29a-88db7a84fce7 250652 0 2020-09-14 11:48:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-14 11:48:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 14 11:48:33.244: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8269 /api/v1/namespaces/watch-8269/configmaps/e2e-watch-test-resource-version bff054c6-183f-4182-a29a-88db7a84fce7 250653 0 2020-09-14 11:48:33 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-14 11:48:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:48:33.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8269" for this suite.
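Starting a watch "from the resource version returned by the first update" means the server replays only events newer than the supplied resourceVersion, which is why the log shows just the second MODIFIED (rv 250652) and the DELETED (rv 250653), not the create or the first update. A toy replay over a recorded event list (a simplification: real resourceVersions are opaque strings that clients must not compare numerically; integers are used here purely to illustrate the ordering semantics):

```python
def events_after(history, start_rv):
    """Replay recorded watch events strictly newer than start_rv."""
    return [(rv, kind) for rv, kind in history if rv > start_rv]

# Hypothetical history matching the log's scenario: create, two
# modifications, then a delete; the watch starts at the first update.
history = [(250650, "ADDED"), (250651, "MODIFIED"),
           (250652, "MODIFIED"), (250653, "DELETED")]
```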
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":3,"skipped":69,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:48:33.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 14 11:48:33.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088" in namespace "projected-5477" to be "Succeeded or Failed"
Sep 14 11:48:33.486: INFO: Pod "downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088": Phase="Pending", Reason="", readiness=false. Elapsed: 51.254334ms
Sep 14 11:48:35.490: INFO: Pod "downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055295105s
Sep 14 11:48:37.494: INFO: Pod "downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05927078s
STEP: Saw pod success
Sep 14 11:48:37.494: INFO: Pod "downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088" satisfied condition "Succeeded or Failed"
Sep 14 11:48:37.496: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088 container client-container:
STEP: delete the pod
Sep 14 11:48:37.543: INFO: Waiting for pod downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088 to disappear
Sep 14 11:48:37.558: INFO: Pod downwardapi-volume-b20c2907-840e-4637-a7ce-4b54c1545088 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:48:37.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5477" for this suite.
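The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines above come from a poll loop: check the pod phase, sleep, retry until a deadline. The same pattern recurs throughout this run. A generic sketch (the helper name, interval, and timeout are illustrative, not the framework's actual signature):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll condition() until it returns True or the timeout elapses.
    Returns True on success, False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

In the log, the condition is "pod phase is Succeeded or Failed", checked roughly every two seconds, matching the Elapsed timestamps.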
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":4,"skipped":77,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:48:37.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-3875
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 14 11:48:37.664: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 14 11:48:37.787: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 14 11:48:39.901: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 14 11:48:41.793: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Sep 14 11:48:43.792: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 14 11:48:45.792: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 14 11:48:47.793: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 14 11:48:49.793: INFO: The status of Pod netserver-0 is Running (Ready = false)
Sep 14 11:48:51.793: INFO: The status of Pod netserver-0 is Running (Ready = true)
Sep 14 11:48:51.798: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 14 11:48:53.803: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 14 11:48:55.804: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 14 11:48:57.803: INFO: The status of Pod netserver-1 is Running (Ready = false)
Sep 14 11:48:59.803: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Sep 14 11:49:03.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.234:8080/dial?request=hostname&protocol=udp&host=10.244.1.152&port=8081&tries=1'] Namespace:pod-network-test-3875 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 14 11:49:03.834: INFO: >>> kubeConfig: /root/.kube/config
I0914 11:49:03.871604 7 log.go:181] (0xc00346ea50) (0xc0004225a0) Create stream
I0914 11:49:03.871627 7 log.go:181] (0xc00346ea50) (0xc0004225a0) Stream added, broadcasting: 1
I0914 11:49:03.875970 7 log.go:181] (0xc00346ea50) Reply frame received for 1
I0914 11:49:03.876019 7 log.go:181] (0xc00346ea50) (0xc00130fc20) Create stream
I0914 11:49:03.876034 7 log.go:181] (0xc00346ea50) (0xc00130fc20) Stream added, broadcasting: 3
I0914 11:49:03.877080 7 log.go:181] (0xc00346ea50) Reply frame received for 3
I0914 11:49:03.877143 7 log.go:181] (0xc00346ea50) (0xc00130fd60) Create stream
I0914 11:49:03.877161 7 log.go:181] (0xc00346ea50) (0xc00130fd60) Stream added, broadcasting: 5
I0914 11:49:03.878061 7 log.go:181] (0xc00346ea50) Reply frame received for 5
I0914 11:49:03.975421 7 log.go:181] (0xc00346ea50) Data frame received for 3
I0914 11:49:03.975455 7 log.go:181] (0xc00130fc20) (3) Data frame handling
I0914 11:49:03.975476 7 log.go:181] (0xc00130fc20) (3) Data frame sent
I0914 11:49:03.976311 7 log.go:181] (0xc00346ea50) Data frame received for 3
I0914 11:49:03.976347 7 log.go:181] (0xc00130fc20) (3) Data frame handling
I0914 11:49:03.976478 7 log.go:181] (0xc00346ea50) Data frame received for 5
I0914 11:49:03.976502 7 log.go:181] (0xc00130fd60) (5) Data frame handling
I0914 11:49:03.978225 7 log.go:181] (0xc00346ea50) Data frame received for 1
I0914 11:49:03.978295 7 log.go:181] (0xc0004225a0) (1) Data frame handling
I0914 11:49:03.978352 7 log.go:181] (0xc0004225a0) (1) Data frame sent
I0914 11:49:03.978388 7 log.go:181] (0xc00346ea50) (0xc0004225a0) Stream removed, broadcasting: 1
I0914 11:49:03.978418 7 log.go:181] (0xc00346ea50) Go away received
I0914 11:49:03.978760 7 log.go:181] (0xc00346ea50) (0xc0004225a0) Stream removed, broadcasting: 1
I0914 11:49:03.978784 7 log.go:181] (0xc00346ea50) (0xc00130fc20) Stream removed, broadcasting: 3
I0914 11:49:03.978796 7 log.go:181] (0xc00346ea50) (0xc00130fd60) Stream removed, broadcasting: 5
Sep 14 11:49:03.978: INFO: Waiting for responses: map[]
Sep 14 11:49:03.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.234:8080/dial?request=hostname&protocol=udp&host=10.244.2.232&port=8081&tries=1'] Namespace:pod-network-test-3875 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 14 11:49:03.982: INFO: >>> kubeConfig: /root/.kube/config
I0914 11:49:04.018215 7 log.go:181] (0xc000143600) (0xc00176dcc0) Create stream
I0914 11:49:04.018248 7 log.go:181] (0xc000143600) (0xc00176dcc0) Stream added, broadcasting: 1
I0914 11:49:04.021349 7 log.go:181] (0xc000143600) Reply frame received for 1
I0914 11:49:04.021426 7 log.go:181] (0xc000143600) (0xc00068c000) Create stream
I0914 11:49:04.021457 7 log.go:181] (0xc000143600) (0xc00068c000) Stream added, broadcasting: 3
I0914 11:49:04.022427 7 log.go:181] (0xc000143600) Reply frame received for 3
I0914 11:49:04.022467 7 log.go:181] (0xc000143600) (0xc00068c5a0) Create stream
I0914 11:49:04.022483 7 log.go:181] (0xc000143600) (0xc00068c5a0) Stream added, broadcasting: 5
I0914 11:49:04.023561 7 log.go:181] (0xc000143600) Reply frame received for 5
I0914 11:49:04.093814 7 log.go:181] (0xc000143600) Data frame received for 3
I0914 11:49:04.093846 7 log.go:181] (0xc00068c000) (3) Data frame handling
I0914 11:49:04.093862 7 log.go:181] (0xc00068c000) (3) Data frame sent
I0914 11:49:04.094689 7 log.go:181] (0xc000143600) Data frame received for 3
I0914 11:49:04.094710 7 log.go:181] (0xc00068c000) (3) Data frame handling
I0914 11:49:04.094728 7 log.go:181] (0xc000143600) Data frame received for 5
I0914 11:49:04.094735 7 log.go:181] (0xc00068c5a0) (5) Data frame handling
I0914 11:49:04.096002 7 log.go:181] (0xc000143600) Data frame received for 1
I0914 11:49:04.096027 7 log.go:181] (0xc00176dcc0) (1) Data frame handling
I0914 11:49:04.096043 7 log.go:181] (0xc00176dcc0) (1) Data frame sent
I0914 11:49:04.096258 7 log.go:181] (0xc000143600) (0xc00176dcc0) Stream removed, broadcasting: 1
I0914 11:49:04.096276 7 log.go:181] (0xc000143600) Go away received
I0914 11:49:04.096409 7 log.go:181] (0xc000143600) (0xc00176dcc0) Stream removed, broadcasting: 1
I0914 11:49:04.096440 7 log.go:181] (0xc000143600) (0xc00068c000) Stream removed, broadcasting: 3
I0914 11:49:04.096452 7 log.go:181] (0xc000143600) (0xc00068c5a0) Stream removed, broadcasting: 5
Sep 14 11:49:04.096: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:04.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3875" for this suite.
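Each connectivity probe above execs into the test pod and curls the netserver's /dial endpoint, which in turn sends a UDP request to the target pod and reports which hostnames answered ("Waiting for responses: map[]" means no targets are left unanswered). Building that probe URL is plain query-string assembly; a sketch with the parameter names copied from the curl commands in the log (the helper itself is illustrative):

```python
from urllib.parse import urlencode

def dial_url(dial_host, target_host, port=8081, protocol="udp", tries=1):
    """Build the /dial probe URL the e2e networking test curls:
    dial_host is the test-container-pod, target_host the netserver pod."""
    query = urlencode({"request": "hostname", "protocol": protocol,
                       "host": target_host, "port": port, "tries": tries})
    return f"http://{dial_host}:8080/dial?{query}"
```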
• [SLOW TEST:26.523 seconds]
[sig-network] Networking
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":5,"skipped":81,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:04.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 14 11:49:04.212: INFO: Waiting up to 5m0s for pod "pod-73ecd635-8173-4caa-a281-86b072be8a45" in namespace "emptydir-6516" to be "Succeeded or Failed"
Sep 14 11:49:04.241: INFO: Pod "pod-73ecd635-8173-4caa-a281-86b072be8a45": Phase="Pending", Reason="", readiness=false. Elapsed: 28.265329ms
Sep 14 11:49:06.250: INFO: Pod "pod-73ecd635-8173-4caa-a281-86b072be8a45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037334677s
Sep 14 11:49:08.253: INFO: Pod "pod-73ecd635-8173-4caa-a281-86b072be8a45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040216822s
STEP: Saw pod success
Sep 14 11:49:08.253: INFO: Pod "pod-73ecd635-8173-4caa-a281-86b072be8a45" satisfied condition "Succeeded or Failed"
Sep 14 11:49:08.255: INFO: Trying to get logs from node latest-worker2 pod pod-73ecd635-8173-4caa-a281-86b072be8a45 container test-container:
STEP: delete the pod
Sep 14 11:49:08.386: INFO: Waiting for pod pod-73ecd635-8173-4caa-a281-86b072be8a45 to disappear
Sep 14 11:49:08.442: INFO: Pod pod-73ecd635-8173-4caa-a281-86b072be8a45 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:08.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6516" for this suite.
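The "(root,0666,default)" case name encodes the test matrix: run as root, mount the emptyDir with file mode 0666, on the default medium. Octal 0666 grants read and write to owner, group, and others, with no execute bit anywhere; Python's stat module renders it the same way `ls -l` would for a regular file:

```python
import stat

# emptyDir file mode 0666: read+write for owner, group, and other.
mode = 0o666
rendered = stat.filemode(stat.S_IFREG | mode)  # S_IFREG marks a regular file
world_writable = bool(mode & stat.S_IWOTH)     # the "other write" bit is set
```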
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-326ec9fb-328e-4578-8559-7c3d8c3825ba
STEP: Creating a pod to test consume configMaps
Sep 14 11:49:08.997: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8" in namespace "projected-3808" to be "Succeeded or Failed"
Sep 14 11:49:09.221: INFO: Pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8": Phase="Pending", Reason="", readiness=false. Elapsed: 223.686716ms
Sep 14 11:49:11.270: INFO: Pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272755432s
Sep 14 11:49:13.275: INFO: Pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277386291s
Sep 14 11:49:15.279: INFO: Pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.28195718s
STEP: Saw pod success
Sep 14 11:49:15.279: INFO: Pod "pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8" satisfied condition "Succeeded or Failed"
Sep 14 11:49:15.283: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8 container projected-configmap-volume-test:
STEP: delete the pod
Sep 14 11:49:15.302: INFO: Waiting for pod pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8 to disappear
Sep 14 11:49:15.347: INFO: Pod pod-projected-configmaps-5014cd70-a967-472f-8177-5a037878eab8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:15.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3808" for this suite.
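The Elapsed values above ("223.686716ms", "6.28195718s") are Go duration strings; when post-processing a run like this it is handy to normalize them to seconds. A small parser covering only single-component durations with the suffixes seen in this log (an assumption: compound forms such as "5m0s", which also appear, would need the full Go duration grammar and are rejected here):

```python
import re

_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_elapsed(text):
    """Convert a simple Go duration like '223.686716ms' or '6.28s' to seconds."""
    match = re.fullmatch(r"([0-9.]+)(ms|s|m|h)", text)
    if not match:
        raise ValueError(f"unsupported duration: {text!r}")
    value, unit = match.groups()
    return float(value) * _UNITS[unit]
```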
• [SLOW TEST:6.906 seconds]
[sig-storage] Projected configMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:15.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 14 11:49:20.160: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7ab4344d-b8b1-4675-b37e-7dc65fdc5051"
Sep 14 11:49:20.160: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7ab4344d-b8b1-4675-b37e-7dc65fdc5051" in namespace "pods-7216" to be "terminated due to deadline exceeded"
Sep 14 11:49:20.230: INFO: Pod "pod-update-activedeadlineseconds-7ab4344d-b8b1-4675-b37e-7dc65fdc5051": Phase="Running", Reason="", readiness=true. Elapsed: 70.008785ms
Sep 14 11:49:22.234: INFO: Pod "pod-update-activedeadlineseconds-7ab4344d-b8b1-4675-b37e-7dc65fdc5051": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.074061929s
Sep 14 11:49:22.234: INFO: Pod "pod-update-activedeadlineseconds-7ab4344d-b8b1-4675-b37e-7dc65fdc5051" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:22.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7216" for this suite.
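The test above shrinks activeDeadlineSeconds on a running pod and then waits for the kubelet to kill it, which shows up as Phase="Failed", Reason="DeadlineExceeded" about two seconds later. The deadline decision itself is simple arithmetic on the pod's start time; a hedged sketch (the function and its arguments are illustrative, not the kubelet's actual code path):

```python
from datetime import datetime, timedelta, timezone

def deadline_exceeded(start_time, active_deadline_seconds, now=None):
    """True once the pod has been running longer than its
    activeDeadlineSeconds; the kubelet then fails the pod."""
    now = now or datetime.now(timezone.utc)
    return now - start_time > timedelta(seconds=active_deadline_seconds)
```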
• [SLOW TEST:6.886 seconds]
[k8s.io] Pods
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":152,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:22.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-1b7bd6f7-088d-4963-8f75-7fa88904f4a9
STEP: Creating secret with name s-test-opt-upd-f51e14b4-b0a1-45a7-8810-fff50bc3ab3e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1b7bd6f7-088d-4963-8f75-7fa88904f4a9
STEP: Updating secret s-test-opt-upd-f51e14b4-b0a1-45a7-8810-fff50bc3ab3e
STEP: Creating secret with name s-test-opt-create-72896370-04fa-4098-9834-2bce450fde03
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:30.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9531" for this suite.
• [SLOW TEST:8.228 seconds]
[sig-storage] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":162,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:30.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 14 11:49:30.588: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 11:49:46.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8267" for this suite.
• [SLOW TEST:16.118 seconds]
[k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":10,"skipped":170,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 11:49:46.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io]
Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-ec450422-8426-402f-9406-2578dd01c3cb in namespace container-probe-6444 Sep 14 11:49:52.686: INFO: Started pod liveness-ec450422-8426-402f-9406-2578dd01c3cb in namespace container-probe-6444 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 11:49:52.689: INFO: Initial restart count of pod liveness-ec450422-8426-402f-9406-2578dd01c3cb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:53:53.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6444" for this suite. 
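The tcp:8080 probe test above observes the pod for roughly four minutes and passes because restartCount never moves off 0. A minimal sketch of the kind of pod spec such a test creates, where the probe targets the same port the container's server listens on so the probe always succeeds (image, args, and timings are illustrative assumptions, not values taken from this run):

```python
# Sketch of a pod with a tcp:8080 liveness probe that should never fail,
# so restartCount stays at 0 for the whole observation window.
tcp_liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-tcp-example"},  # assumed name
    "spec": {
        "containers": [
            {
                "name": "liveness",
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",  # assumed image
                "args": ["netexec", "--http-port=8080"],  # serves on the probed port
                "livenessProbe": {
                    "tcpSocket": {"port": 8080},  # kubelet opens a TCP connection
                    "initialDelaySeconds": 15,
                    "periodSeconds": 10,
                    "failureThreshold": 3,  # consecutive failures before restart
                },
            }
        ]
    },
}
```

Because the connection always succeeds, the kubelet never reaches the failure threshold and the container is never restarted.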
• [SLOW TEST:246.691 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":184,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:53:53.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 11:53:54.081: INFO: Waiting up to 5m0s for pod 
"alpine-nnp-false-32e14b3e-641d-48dd-aed3-f21e494d0015" in namespace "security-context-test-2073" to be "Succeeded or Failed" Sep 14 11:53:54.278: INFO: Pod "alpine-nnp-false-32e14b3e-641d-48dd-aed3-f21e494d0015": Phase="Pending", Reason="", readiness=false. Elapsed: 197.003046ms Sep 14 11:53:56.283: INFO: Pod "alpine-nnp-false-32e14b3e-641d-48dd-aed3-f21e494d0015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201602287s Sep 14 11:53:58.286: INFO: Pod "alpine-nnp-false-32e14b3e-641d-48dd-aed3-f21e494d0015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204633282s Sep 14 11:53:58.286: INFO: Pod "alpine-nnp-false-32e14b3e-641d-48dd-aed3-f21e494d0015" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:53:58.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2073" for this suite. 
• [SLOW TEST:5.041 seconds] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:53:58.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-map-311853d9-94de-49c2-aa38-5521692127a2 STEP: Creating a pod to test consume secrets Sep 14 11:53:58.465: INFO: Waiting up to 5m0s for pod "pod-secrets-2f946743-46f5-49d3-b071-55daca50a824" in namespace "secrets-9758" to be "Succeeded or Failed" Sep 14 11:53:58.469: INFO: Pod "pod-secrets-2f946743-46f5-49d3-b071-55daca50a824": Phase="Pending", Reason="", readiness=false. Elapsed: 3.872846ms Sep 14 11:54:00.472: INFO: Pod "pod-secrets-2f946743-46f5-49d3-b071-55daca50a824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007482849s Sep 14 11:54:02.477: INFO: Pod "pod-secrets-2f946743-46f5-49d3-b071-55daca50a824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012098519s STEP: Saw pod success Sep 14 11:54:02.477: INFO: Pod "pod-secrets-2f946743-46f5-49d3-b071-55daca50a824" satisfied condition "Succeeded or Failed" Sep 14 11:54:02.479: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2f946743-46f5-49d3-b071-55daca50a824 container secret-volume-test: STEP: delete the pod Sep 14 11:54:02.551: INFO: Waiting for pod pod-secrets-2f946743-46f5-49d3-b071-55daca50a824 to disappear Sep 14 11:54:02.583: INFO: Pod pod-secrets-2f946743-46f5-49d3-b071-55daca50a824 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:54:02.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9758" for this suite. 
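"Consumable from pods in volume with mappings" refers to the `items` list on a secret volume, which remaps a secret key to a chosen file path inside the mount. A sketch of the pattern, with all names, the image, and the mapping being illustrative assumptions:

```python
# Sketch of mounting a Secret with a key-to-path mapping: the key "data-1"
# is surfaced as the file new-path-data-1, so the test container reads
# /etc/secret-volume/new-path-data-1 instead of /etc/secret-volume/data-1.
secret_mapping_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},  # assumed name
    "spec": {
        "restartPolicy": "Never",
        "volumes": [
            {
                "name": "secret-volume",
                "secret": {
                    "secretName": "secret-test-map-example",  # assumed name
                    "items": [{"key": "data-1", "path": "new-path-data-1"}],
                },
            }
        ],
        "containers": [
            {
                "name": "secret-volume-test",
                "image": "busybox:1.29",  # assumed image
                "command": ["cat", "/etc/secret-volume/new-path-data-1"],
                "volumeMounts": [
                    {
                        "name": "secret-volume",
                        "mountPath": "/etc/secret-volume",
                        "readOnly": True,
                    }
                ],
            }
        ],
    },
}
```

The test then fetches the container's logs (as the "Trying to get logs" line above shows) to confirm the mapped file held the expected secret value.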
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:54:02.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Sep 14 11:54:02.717: INFO: Waiting up to 5m0s for pod "pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc" in namespace "emptydir-9474" to be "Succeeded or Failed" Sep 14 11:54:02.730: INFO: Pod "pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.493466ms Sep 14 11:54:04.733: INFO: Pod "pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016282371s Sep 14 11:54:07.531: INFO: Pod "pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.813936031s STEP: Saw pod success Sep 14 11:54:07.531: INFO: Pod "pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc" satisfied condition "Succeeded or Failed" Sep 14 11:54:07.533: INFO: Trying to get logs from node latest-worker2 pod pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc container test-container: STEP: delete the pod Sep 14 11:54:07.694: INFO: Waiting for pod pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc to disappear Sep 14 11:54:07.705: INFO: Pod pod-0b8b666e-6702-4272-b635-c37a0fdbd7bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:54:07.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9474" for this suite. • [SLOW TEST:5.146 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":278,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:54:07.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-a696c4e2-69d2-4478-a44d-fc8f88fcd892 in namespace container-probe-7863 Sep 14 11:54:11.884: INFO: Started pod busybox-a696c4e2-69d2-4478-a44d-fc8f88fcd892 in namespace container-probe-7863 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 11:54:11.887: INFO: Initial restart count of pod busybox-a696c4e2-69d2-4478-a44d-fc8f88fcd892 is 0 Sep 14 11:55:04.384: INFO: Restart count of pod container-probe-7863/busybox-a696c4e2-69d2-4478-a44d-fc8f88fcd892 is now 1 (52.497271569s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:04.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7863" for this suite. 
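The restart recorded above (restartCount 0 to 1 after about 52 seconds) comes from the classic exec-probe pattern: the container creates a health file, removes it after a delay, and from then on the probe command exits non-zero. A sketch, with image and timings as illustrative assumptions:

```python
# Sketch of the exec liveness probe pattern behind the observed restart:
# /tmp/health exists for the first 10 seconds, then `cat /tmp/health`
# starts failing and the kubelet restarts the container.
exec_liveness_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-exec-liveness-example"},  # assumed name
    "spec": {
        "containers": [
            {
                "name": "busybox",
                "image": "busybox:1.29",  # assumed image
                "command": [
                    "/bin/sh", "-c",
                    "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600",
                ],
                "livenessProbe": {
                    "exec": {"command": ["cat", "/tmp/health"]},  # non-zero once removed
                    "initialDelaySeconds": 15,
                    "failureThreshold": 1,  # a single failure triggers the restart
                },
            }
        ]
    },
}
```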
• [SLOW TEST:56.686 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":280,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:04.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-2r62 STEP: Creating a pod to test atomic-volume-subpath Sep 14 11:55:04.520: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-configmap-2r62" in namespace "subpath-5670" to be "Succeeded or Failed" Sep 14 11:55:04.537: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Pending", Reason="", readiness=false. Elapsed: 17.199265ms Sep 14 11:55:06.698: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178093985s Sep 14 11:55:08.703: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 4.182822752s Sep 14 11:55:10.707: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 6.187447302s Sep 14 11:55:12.712: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 8.192259418s Sep 14 11:55:14.716: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 10.195930319s Sep 14 11:55:16.721: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 12.201158086s Sep 14 11:55:18.726: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 14.205583s Sep 14 11:55:20.730: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 16.209877241s Sep 14 11:55:22.734: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 18.214171797s Sep 14 11:55:24.738: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 20.217989141s Sep 14 11:55:26.742: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Running", Reason="", readiness=true. Elapsed: 22.221797288s Sep 14 11:55:28.746: INFO: Pod "pod-subpath-test-configmap-2r62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.225554897s STEP: Saw pod success Sep 14 11:55:28.746: INFO: Pod "pod-subpath-test-configmap-2r62" satisfied condition "Succeeded or Failed" Sep 14 11:55:28.748: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-2r62 container test-container-subpath-configmap-2r62: STEP: delete the pod Sep 14 11:55:28.777: INFO: Waiting for pod pod-subpath-test-configmap-2r62 to disappear Sep 14 11:55:28.799: INFO: Pod pod-subpath-test-configmap-2r62 no longer exists STEP: Deleting pod pod-subpath-test-configmap-2r62 Sep 14 11:55:28.799: INFO: Deleting pod "pod-subpath-test-configmap-2r62" in namespace "subpath-5670" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:28.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5670" for this suite. • [SLOW TEST:24.385 seconds] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":16,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:28.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 14 11:55:28.935: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 14 11:55:28.939: INFO: starting watch STEP: patching STEP: updating Sep 14 11:55:28.949: INFO: waiting for watch events with expected annotations Sep 14 11:55:28.949: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:29.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-3754" for this suite. 
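The Ingress API test above exercises the full verb set (create, get, list, watch, patch, update, delete, deletecollection, plus the /status subresource) against `networking.k8s.io/v1`. A minimal object those verbs could operate on might look like this (host, backend service, and names are illustrative assumptions):

```python
# Sketch of a minimal networking.k8s.io/v1 Ingress, the resource whose CRUD
# verbs and /status subresource the conformance test exercises.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "example-ingress", "labels": {"special-label": "e2e"}},  # assumed
    "spec": {
        "rules": [
            {
                "host": "example.com",  # assumed host
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "pathType": "Prefix",  # required in the v1 API
                            "backend": {
                                "service": {
                                    "name": "example-svc",  # assumed service
                                    "port": {"number": 80},
                                }
                            },
                        }
                    ]
                },
            }
        ]
    },
}
```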
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":17,"skipped":305,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:29.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4f033938-b858-4e44-9cfb-e634c8e2fec0 STEP: Creating a pod to test consume secrets Sep 14 11:55:29.125: INFO: Waiting up to 5m0s for pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb" in namespace "secrets-2652" to be "Succeeded or Failed" Sep 14 11:55:29.183: INFO: Pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 57.602227ms Sep 14 11:55:31.188: INFO: Pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062666951s Sep 14 11:55:33.191: INFO: Pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.066037288s Sep 14 11:55:35.363: INFO: Pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.237828297s STEP: Saw pod success Sep 14 11:55:35.363: INFO: Pod "pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb" satisfied condition "Succeeded or Failed" Sep 14 11:55:35.380: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb container secret-volume-test: STEP: delete the pod Sep 14 11:55:35.446: INFO: Waiting for pod pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb to disappear Sep 14 11:55:35.488: INFO: Pod pod-secrets-fe6ac036-dea1-47fe-a8db-495244a9b0cb no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:35.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2652" for this suite. • [SLOW TEST:6.434 seconds] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":312,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:35.499: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Sep 14 11:55:35.585: INFO: created test-pod-1 Sep 14 11:55:35.632: INFO: created test-pod-2 Sep 14 11:55:35.645: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:35.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-393" for this suite. 
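"Delete a collection of pods" means a single deletecollection call removes every pod matching a label selector, rather than deleting the three pods one by one. The selector semantics are plain key/value equality over the pod labels, simulated below (the label key and value are illustrative assumptions):

```python
# Sketch of deletecollection semantics: three pods share a label, and one
# selector-scoped delete removes them all. Equality matching is simulated.
pods = [
    {"metadata": {"name": f"test-pod-{i}", "labels": {"type": "Testing"}}}
    for i in (1, 2, 3)
]

def matches(pod, selector):
    """True if every key=value pair in the selector appears in the pod's labels."""
    labels = pod["metadata"].get("labels", {})
    return all(labels.get(k) == v for k, v in selector.items())

# Rough equivalent of:
#   DELETE /api/v1/namespaces/NS/pods?labelSelector=type%3DTesting
remaining = [p for p in pods if not matches(p, {"type": "Testing"})]
# remaining is empty: all three pods matched the selector and were deleted
```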
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":19,"skipped":319,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:35.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 14 11:55:35.954: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Sep 14 11:55:36.801: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 14 11:55:39.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735681336, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735681336, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735681336, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735681336, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 11:55:41.746: INFO: Waited 728.335729ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:42.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7128" for this suite. 
• [SLOW TEST:6.538 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":20,"skipped":327,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:42.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image 
docker.io/library/httpd:2.4.38-alpine Sep 14 11:55:42.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9545' Sep 14 11:55:47.885: INFO: stderr: "" Sep 14 11:55:47.885: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 14 11:55:52.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9545 -o json' Sep 14 11:55:53.038: INFO: stderr: "" Sep 14 11:55:53.039: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-14T11:55:47Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-14T11:55:47Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n 
\"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.6\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-14T11:55:50Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9545\",\n \"resourceVersion\": \"253032\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9545/pods/e2e-test-httpd-pod\",\n \"uid\": \"eff5a6ce-4a6b-4e39-bb05-6cdc958fdaf8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bpk7f\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": 
\"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bpk7f\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bpk7f\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-14T11:55:47Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-14T11:55:50Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-14T11:55:50Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-14T11:55:47Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://bd366dd4fa731f81db4b3d375f44350fe07f0673afe10a89ef2dbda6a7f031c6\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-14T11:55:50Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.6\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.6\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-14T11:55:47Z\"\n }\n}\n" STEP: replace the image in the pod Sep 14 11:55:53.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9545' Sep 14 11:55:53.388: INFO: stderr: "" Sep 14 11:55:53.388: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 14 11:55:53.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9545' Sep 14 11:55:58.310: INFO: stderr: "" Sep 14 11:55:58.310: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:55:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9545" for this suite. • [SLOW TEST:15.925 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":21,"skipped":332,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:55:58.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Sep 14 11:55:58.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-5748 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 14 11:55:58.484: INFO: stderr: "" Sep 14 11:55:58.484: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Sep 14 11:55:58.484: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 14 11:55:58.484: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5748" to be "running and ready, or succeeded" Sep 14 11:55:58.498: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.170162ms Sep 14 11:56:00.502: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018335268s Sep 14 11:56:02.506: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.02218927s Sep 14 11:56:02.506: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 14 11:56:02.506: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Sep 14 11:56:02.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748' Sep 14 11:56:02.615: INFO: stderr: "" Sep 14 11:56:02.615: INFO: stdout: "I0914 11:56:00.639714 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6wt 260\nI0914 11:56:00.839880 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/pr2m 215\nI0914 11:56:01.039902 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/wgr 403\nI0914 11:56:01.239836 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/54x 473\nI0914 11:56:01.439852 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/dftf 251\nI0914 11:56:01.639887 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/8wv6 565\nI0914 11:56:01.839894 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/7hjf 235\nI0914 11:56:02.039889 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/gqwj 346\nI0914 11:56:02.239889 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/kgqc 277\nI0914 11:56:02.439871 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/dwc7 505\n" STEP: limiting log lines Sep 14 11:56:02.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748 --tail=1' Sep 14 11:56:02.721: INFO: stderr: "" Sep 14 11:56:02.721: INFO: stdout: "I0914 11:56:02.639855 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/nsbg 318\n" Sep 14 11:56:02.721: INFO: got output "I0914 11:56:02.639855 1 logs_generator.go:76] 10 POST 
/api/v1/namespaces/kube-system/pods/nsbg 318\n" STEP: limiting log bytes Sep 14 11:56:02.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748 --limit-bytes=1' Sep 14 11:56:02.842: INFO: stderr: "" Sep 14 11:56:02.842: INFO: stdout: "I" Sep 14 11:56:02.842: INFO: got output "I" STEP: exposing timestamps Sep 14 11:56:02.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748 --tail=1 --timestamps' Sep 14 11:56:02.955: INFO: stderr: "" Sep 14 11:56:02.955: INFO: stdout: "2020-09-14T11:56:02.839998622Z I0914 11:56:02.839870 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/xrzp 343\n" Sep 14 11:56:02.955: INFO: got output "2020-09-14T11:56:02.839998622Z I0914 11:56:02.839870 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/xrzp 343\n" STEP: restricting to a time range Sep 14 11:56:05.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748 --since=1s' Sep 14 11:56:05.619: INFO: stderr: "" Sep 14 11:56:05.619: INFO: stdout: "I0914 11:56:04.639879 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/njfx 546\nI0914 11:56:04.839861 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/6n6 557\nI0914 11:56:05.039852 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/bfl 481\nI0914 11:56:05.239872 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/ccp 555\nI0914 11:56:05.439905 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/vv5 466\n" Sep 14 11:56:05.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5748 --since=24h' 
Sep 14 11:56:05.746: INFO: stderr: "" Sep 14 11:56:05.747: INFO: stdout: "I0914 11:56:00.639714 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6wt 260\nI0914 11:56:00.839880 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/pr2m 215\nI0914 11:56:01.039902 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/wgr 403\nI0914 11:56:01.239836 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/54x 473\nI0914 11:56:01.439852 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/dftf 251\nI0914 11:56:01.639887 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/8wv6 565\nI0914 11:56:01.839894 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/7hjf 235\nI0914 11:56:02.039889 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/gqwj 346\nI0914 11:56:02.239889 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/kgqc 277\nI0914 11:56:02.439871 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/dwc7 505\nI0914 11:56:02.639855 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/nsbg 318\nI0914 11:56:02.839870 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/xrzp 343\nI0914 11:56:03.039850 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/ckvk 404\nI0914 11:56:03.239884 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/42bg 390\nI0914 11:56:03.439882 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/fmh4 229\nI0914 11:56:03.639837 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/c2lv 249\nI0914 11:56:03.839815 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/jpsw 540\nI0914 11:56:04.039838 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/pz4 446\nI0914 11:56:04.239847 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/qh27 422\nI0914 11:56:04.439904 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/nh7 203\nI0914 11:56:04.639879 1 
logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/njfx 546\nI0914 11:56:04.839861 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/6n6 557\nI0914 11:56:05.039852 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/bfl 481\nI0914 11:56:05.239872 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/ccp 555\nI0914 11:56:05.439905 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/vv5 466\nI0914 11:56:05.639841 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/2fv 453\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 14 11:56:05.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5748' Sep 14 11:56:08.309: INFO: stderr: "" Sep 14 11:56:08.309: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:56:08.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5748" for this suite. 
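[Editor's note] The filtering steps above use standard `kubectl logs` flags: `--tail=N` (last N lines), `--limit-bytes=N` (first N bytes), `--timestamps` (prefix RFC3339 timestamps), and `--since=DURATION` (only lines newer than the duration). Their line/byte truncation behavior can be mimicked locally on a captured sample of the generator output; the scratch path below is illustrative, and the two sample lines are copied from the run above:

```shell
# Two generator lines captured from the run above, written to a scratch file.
cat > /tmp/logs-generator.sample <<'EOF'
I0914 11:56:00.639714 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/6wt 260
I0914 11:56:00.839880 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/pr2m 215
EOF
# Local analogue of `kubectl logs ... --tail=1`: keep only the last line.
tail -n 1 /tmp/logs-generator.sample
# Local analogue of `kubectl logs ... --limit-bytes=1`: keep only the first byte.
head -c 1 /tmp/logs-generator.sample; echo
```

This mirrors the test's observed output: `--tail=1` returned exactly one generator line, and `--limit-bytes=1` returned the single character "I".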
• [SLOW TEST:10.001 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":22,"skipped":339,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:56:08.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-4d71e41f-51bb-4a9d-98d4-cff701b6ee6a STEP: Creating a pod to test consume configMaps Sep 14 11:56:08.616: INFO: Waiting up to 5m0s for pod "pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8" in 
namespace "configmap-6923" to be "Succeeded or Failed" Sep 14 11:56:08.661: INFO: Pod "pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.798801ms Sep 14 11:56:10.664: INFO: Pod "pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047974966s Sep 14 11:56:12.669: INFO: Pod "pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052590253s STEP: Saw pod success Sep 14 11:56:12.669: INFO: Pod "pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8" satisfied condition "Succeeded or Failed" Sep 14 11:56:12.671: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8 container configmap-volume-test: STEP: delete the pod Sep 14 11:56:12.712: INFO: Waiting for pod pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8 to disappear Sep 14 11:56:12.741: INFO: Pod pod-configmaps-3601d703-44ff-41c5-8950-6184b91329c8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:56:12.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6923" for this suite. 
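[Editor's note] The "consumable from pods in volume with mappings as non-root" pattern above mounts a ConfigMap as a volume while remapping a key to a custom path and running the pod as a non-root UID. A sketch of the equivalent manifest; the pod name, ConfigMap name, key, and path are hypothetical placeholders, not the generated names from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mapped          # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                   # non-root, per the test title
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap              # hypothetical ConfigMap name
      items:
      - key: data-2                   # map this key...
        path: path/to/data-2          # ...to a nested file path in the mount
```

The `items` mapping is what distinguishes this case from a plain ConfigMap volume: only the listed keys are projected, at the paths given.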
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":343,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:56:12.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 14 11:56:16.886: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:56:16.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2739" for this suite. 
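[Editor's note] The termination-message test above verifies that when a container writes to its termination message path and exits successfully, that content (here "OK") is surfaced in the container status even with `terminationMessagePolicy: FallbackToLogsOnError` set. A minimal sketch of such a container spec; the pod and container names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path
    terminationMessagePolicy: FallbackToLogsOnError
```

With this policy, the kubelet falls back to the tail of the container log only when the file is empty *and* the container failed; since this container succeeds and writes the file, the file content wins, matching the `Expected: &{OK} to match Container's Termination Message: OK` line above.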
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":344,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:56:16.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 14 11:56:17.021: INFO: Waiting up to 1m0s for all nodes to be ready Sep 14 11:57:17.042: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 14 11:57:17.236: INFO: Created pod: pod0-sched-preemption-low-priority Sep 14 11:57:17.279: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 11:58:03.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5841" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:106.504 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":25,"skipped":346,"failed":0} [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 11:58:03.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-189c997b-20a8-46d1-891e-e47380358991 in namespace container-probe-6914 Sep 14 11:58:07.842: INFO: Started pod busybox-189c997b-20a8-46d1-891e-e47380358991 in namespace container-probe-6914 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 11:58:07.844: INFO: Initial restart count of pod busybox-189c997b-20a8-46d1-891e-e47380358991 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:02:09.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6914" for this suite. 
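[Editor's note] The probe test above creates a busybox pod whose exec liveness probe runs `cat /tmp/health`, then watches for roughly four minutes to confirm restartCount stays 0. A minimal sketch in the style of the upstream docs; the pod name and timing values are illustrative, not the generated values from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness              # hypothetical; the run used a generated name
spec:
  containers:
  - name: busybox
    image: busybox
    # The file is created at startup and never removed, so the probe always succeeds.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Because `cat /tmp/health` exits 0 on every probe, the kubelet never restarts the container, which is exactly the "should *not* be restarted" condition the test asserts.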
• [SLOW TEST:246.171 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":346,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:02:09.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 14 12:02:09.692: INFO: Waiting up to 5m0s for pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40" in namespace "emptydir-6031" to be "Succeeded or Failed" Sep 14 12:02:09.703: INFO: Pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.100896ms Sep 14 12:02:11.725: INFO: Pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033118272s Sep 14 12:02:13.729: INFO: Pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40": Phase="Running", Reason="", readiness=true. Elapsed: 4.036534952s Sep 14 12:02:15.733: INFO: Pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040745729s STEP: Saw pod success Sep 14 12:02:15.733: INFO: Pod "pod-e5617202-94d2-441a-8cac-5d9c766bac40" satisfied condition "Succeeded or Failed" Sep 14 12:02:15.737: INFO: Trying to get logs from node latest-worker2 pod pod-e5617202-94d2-441a-8cac-5d9c766bac40 container test-container: STEP: delete the pod Sep 14 12:02:15.816: INFO: Waiting for pod pod-e5617202-94d2-441a-8cac-5d9c766bac40 to disappear Sep 14 12:02:15.833: INFO: Pod pod-e5617202-94d2-441a-8cac-5d9c766bac40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:02:15.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6031" for this suite. 
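[Editor's note] The "(non-root,0666,tmpfs)" case above exercises a memory-backed emptyDir: the pod runs as a non-root UID, creates a file with 0666 permissions on the tmpfs mount, and exits. A sketch of the equivalent manifest; the pod name and the container command are illustrative stand-ins for the e2e test image's behavior:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs            # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                   # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Create a world-readable/writable file on the tmpfs volume, then show its mode.
    command: ["/bin/sh", "-c", "umask 0000; touch /test-volume/test-file; ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed emptyDir, per the test title
```

`medium: Memory` is what makes the emptyDir tmpfs-backed; omitting it gives a disk-backed volume on the node's filesystem instead.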
• [SLOW TEST:6.244 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:02:15.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:02:15.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6325' Sep 14 
12:02:16.229: INFO: stderr: "" Sep 14 12:02:16.229: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Sep 14 12:02:16.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6325' Sep 14 12:02:16.578: INFO: stderr: "" Sep 14 12:02:16.578: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 14 12:02:17.613: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:02:17.613: INFO: Found 0 / 1 Sep 14 12:02:18.583: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:02:18.583: INFO: Found 0 / 1 Sep 14 12:02:19.583: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:02:19.583: INFO: Found 0 / 1 Sep 14 12:02:20.583: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:02:20.583: INFO: Found 1 / 1 Sep 14 12:02:20.583: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 14 12:02:20.586: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:02:20.586: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
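The two manifests piped to `kubectl create -f -` above are not echoed in the log; only the resulting object names, labels, image, and port are visible in the later `kubectl describe` output. A sketch consistent with that output might look like this (replica count and selector layout are assumptions):

```yaml
# Illustrative reconstruction of the objects created above, based on the describe output below.
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    app: agnhost
    role: primary
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
    spec:
      containers:
      - name: agnhost-primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
spec:
  selector:
    app: agnhost
    role: primary
  ports:
  - port: 6379
```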
Sep 14 12:02:20.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config describe pod agnhost-primary-d7bgn --namespace=kubectl-6325' Sep 14 12:02:20.700: INFO: stderr: "" Sep 14 12:02:20.700: INFO: stdout: "Name: agnhost-primary-d7bgn\nNamespace: kubectl-6325\nPriority: 0\nNode: latest-worker2/172.18.0.16\nStart Time: Mon, 14 Sep 2020 12:02:16 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.24\nIPs:\n IP: 10.244.2.24\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://1840544d1a7c6e7d3a0f3bd582da780fc9fe76952dd586680a09d20f86aa6c2f\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 14 Sep 2020 12:02:18 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-xngz6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-xngz6:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-xngz6\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6325/agnhost-primary-d7bgn to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-primary\n" Sep 14 
12:02:20.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-6325' Sep 14 12:02:20.841: INFO: stderr: "" Sep 14 12:02:20.841: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6325\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-d7bgn\n" Sep 14 12:02:20.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-6325' Sep 14 12:02:20.971: INFO: stderr: "" Sep 14 12:02:20.971: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6325\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.98.89.214\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.24:6379\nSession Affinity: None\nEvents: \n" Sep 14 12:02:20.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config describe node latest-control-plane' Sep 14 12:02:21.112: INFO: stderr: "" Sep 14 12:02:21.112: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n 
volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 13 Sep 2020 16:59:05 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 14 Sep 2020 12:02:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 14 Sep 2020 11:59:13 +0000 Sun, 13 Sep 2020 16:59:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 14 Sep 2020 11:59:13 +0000 Sun, 13 Sep 2020 16:59:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 14 Sep 2020 11:59:13 +0000 Sun, 13 Sep 2020 16:59:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 14 Sep 2020 11:59:13 +0000 Sun, 13 Sep 2020 17:00:08 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.14\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 8f70c1d290e5458cb4d582a21df6866d\n System UUID: 29af9a65-ac2f-4a97-ac5e-53e2f2d2d58d\n Boot ID: 6cae8cc9-70fd-486a-9495-a1a7da130c42\n Kernel Version: 4.15.0-115-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kindnet-zclpj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-proxy-5w7cq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Sep 14 12:02:21.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config describe namespace kubectl-6325' Sep 14 12:02:21.218: INFO: stderr: "" Sep 14 12:02:21.219: INFO: stdout: "Name: kubectl-6325\nLabels: e2e-framework=kubectl\n e2e-run=115970fe-b37c-4cb1-ae93-798b9f1159f5\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:02:21.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6325" for this suite. 
• [SLOW TEST:5.379 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":28,"skipped":375,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:02:21.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:02:21.315: INFO: Create a RollingUpdate 
DaemonSet Sep 14 12:02:21.319: INFO: Check that daemon pods launch on every node of the cluster Sep 14 12:02:21.342: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:21.355: INFO: Number of nodes with available pods: 0 Sep 14 12:02:21.355: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:02:22.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:22.362: INFO: Number of nodes with available pods: 0 Sep 14 12:02:22.362: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:02:23.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:23.363: INFO: Number of nodes with available pods: 0 Sep 14 12:02:23.363: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:02:24.577: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:24.625: INFO: Number of nodes with available pods: 0 Sep 14 12:02:24.625: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:02:25.361: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:25.363: INFO: Number of nodes with available pods: 2 Sep 14 12:02:25.363: INFO: Number of running nodes: 2, number of available pods: 2 Sep 14 12:02:25.363: INFO: Update the DaemonSet to trigger a rollout Sep 14 12:02:25.371: INFO: Updating DaemonSet daemon-set Sep 14 12:02:37.738: INFO: Roll back the DaemonSet before 
rollout is complete Sep 14 12:02:37.767: INFO: Updating DaemonSet daemon-set Sep 14 12:02:37.767: INFO: Make sure DaemonSet rollback is complete Sep 14 12:02:37.788: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:37.788: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:37.800: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:39.385: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:39.385: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:39.390: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:41.159: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:41.159: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:41.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:41.805: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:41.805: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:41.810: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:42.805: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Sep 14 12:02:42.805: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:42.810: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:43.805: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:43.805: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:43.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:44.805: INFO: Wrong image for pod: daemon-set-6rz5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 14 12:02:44.805: INFO: Pod daemon-set-6rz5r is not available Sep 14 12:02:44.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:02:45.805: INFO: Pod daemon-set-rmkb5 is not available Sep 14 12:02:45.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1425, will wait for the garbage collector to delete the pods Sep 14 12:02:45.877: INFO: Deleting DaemonSet.extensions daemon-set took: 7.729847ms Sep 14 12:02:46.277: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.254176ms Sep 14 12:02:55.994: INFO: Number of nodes with available pods: 0 Sep 14 12:02:55.994: INFO: Number of running nodes: 0, number of available 
pods: 0 Sep 14 12:02:55.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1425/daemonsets","resourceVersion":"255136"},"items":null} Sep 14 12:02:56.002: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1425/pods","resourceVersion":"255136"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:02:56.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1425" for this suite. • [SLOW TEST:34.792 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":29,"skipped":378,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:02:56.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service 
account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-52bae49c-ebb5-42d2-b158-f13a97ed35c1 STEP: Creating configMap with name cm-test-opt-upd-aab1d854-3cec-4dba-9c0e-21213cff022d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-52bae49c-ebb5-42d2-b158-f13a97ed35c1 STEP: Updating configmap cm-test-opt-upd-aab1d854-3cec-4dba-9c0e-21213cff022d STEP: Creating configMap with name cm-test-opt-create-eac2e1c2-3f18-4829-b75a-8fabe1389437 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:03:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3277" for this suite. 
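The "opt-del" / "opt-upd" / "opt-create" ConfigMaps above are mounted as *optional* volume sources, which is why the pod tolerates one ConfigMap being deleted and another being created after it starts. A minimal sketch of one such mount (pod name, image, and mount path are assumptions; the ConfigMap name is from the log):

```yaml
# Illustrative sketch of an optional ConfigMap volume mount.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example                     # hypothetical name
spec:
  containers:
  - name: delcm-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20 # image is an assumption
    volumeMounts:
    - name: delcm-volume
      mountPath: /etc/cm-volume-del
  volumes:
  - name: delcm-volume
    configMap:
      name: cm-test-opt-del-52bae49c-ebb5-42d2-b158-f13a97ed35c1  # name from the log
      optional: true   # the pod stays healthy even if this ConfigMap is deleted
```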
• [SLOW TEST:10.238 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":385,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:03:06.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-69d8b1cd-fd8a-4ecf-b7b7-0450e95cbc47
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:03:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7052" for this suite.
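The empty-key test passes precisely because the API server's validation rejects the create. A ConfigMap of roughly this shape (the data value is an assumption; the name is from the log) would be refused with a validation error, since key names must be non-empty:

```yaml
# Illustrative sketch: this ConfigMap is INVALID and the create request fails,
# which is the behavior the test asserts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptyKey-69d8b1cd-fd8a-4ecf-b7b7-0450e95cbc47  # name from the log
data:
  "": value-1   # empty key -> rejected by API server validation
```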
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":31,"skipped":425,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:03:06.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3865 STEP: creating service affinity-clusterip in namespace services-3865 STEP: creating replication controller affinity-clusterip in namespace services-3865 I0914 12:03:06.648488 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3865, replica count: 3 I0914 12:03:09.699036 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:03:12.699268 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Sep 14 12:03:12.708: INFO: Creating new exec pod Sep 14 12:03:17.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3865 execpod-affinitycmj2q -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 14 12:03:18.125: INFO: stderr: "I0914 12:03:18.023673 373 log.go:181] (0xc000188370) (0xc00064e140) Create stream\nI0914 12:03:18.023746 373 log.go:181] (0xc000188370) (0xc00064e140) Stream added, broadcasting: 1\nI0914 12:03:18.026406 373 log.go:181] (0xc000188370) Reply frame received for 1\nI0914 12:03:18.026454 373 log.go:181] (0xc000188370) (0xc000cf60a0) Create stream\nI0914 12:03:18.026466 373 log.go:181] (0xc000188370) (0xc000cf60a0) Stream added, broadcasting: 3\nI0914 12:03:18.027700 373 log.go:181] (0xc000188370) Reply frame received for 3\nI0914 12:03:18.027753 373 log.go:181] (0xc000188370) (0xc000711cc0) Create stream\nI0914 12:03:18.027768 373 log.go:181] (0xc000188370) (0xc000711cc0) Stream added, broadcasting: 5\nI0914 12:03:18.028874 373 log.go:181] (0xc000188370) Reply frame received for 5\nI0914 12:03:18.117186 373 log.go:181] (0xc000188370) Data frame received for 5\nI0914 12:03:18.117210 373 log.go:181] (0xc000711cc0) (5) Data frame handling\nI0914 12:03:18.117227 373 log.go:181] (0xc000711cc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0914 12:03:18.117646 373 log.go:181] (0xc000188370) Data frame received for 5\nI0914 12:03:18.117678 373 log.go:181] (0xc000711cc0) (5) Data frame handling\nI0914 12:03:18.117711 373 log.go:181] (0xc000711cc0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0914 12:03:18.117900 373 log.go:181] (0xc000188370) Data frame received for 3\nI0914 12:03:18.117912 373 log.go:181] (0xc000cf60a0) (3) Data frame handling\nI0914 12:03:18.118055 373 log.go:181] (0xc000188370) Data frame received for 5\nI0914 12:03:18.118076 373 log.go:181] (0xc000711cc0) (5) Data 
frame handling\nI0914 12:03:18.120224 373 log.go:181] (0xc000188370) Data frame received for 1\nI0914 12:03:18.120256 373 log.go:181] (0xc00064e140) (1) Data frame handling\nI0914 12:03:18.120271 373 log.go:181] (0xc00064e140) (1) Data frame sent\nI0914 12:03:18.120298 373 log.go:181] (0xc000188370) (0xc00064e140) Stream removed, broadcasting: 1\nI0914 12:03:18.120318 373 log.go:181] (0xc000188370) Go away received\nI0914 12:03:18.120609 373 log.go:181] (0xc000188370) (0xc00064e140) Stream removed, broadcasting: 1\nI0914 12:03:18.120623 373 log.go:181] (0xc000188370) (0xc000cf60a0) Stream removed, broadcasting: 3\nI0914 12:03:18.120629 373 log.go:181] (0xc000188370) (0xc000711cc0) Stream removed, broadcasting: 5\n" Sep 14 12:03:18.125: INFO: stdout: "" Sep 14 12:03:18.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3865 execpod-affinitycmj2q -- /bin/sh -x -c nc -zv -t -w 2 10.103.215.128 80' Sep 14 12:03:18.352: INFO: stderr: "I0914 12:03:18.266240 391 log.go:181] (0xc000d0d290) (0xc000d04960) Create stream\nI0914 12:03:18.266302 391 log.go:181] (0xc000d0d290) (0xc000d04960) Stream added, broadcasting: 1\nI0914 12:03:18.271692 391 log.go:181] (0xc000d0d290) Reply frame received for 1\nI0914 12:03:18.271732 391 log.go:181] (0xc000d0d290) (0xc000b6c000) Create stream\nI0914 12:03:18.271743 391 log.go:181] (0xc000d0d290) (0xc000b6c000) Stream added, broadcasting: 3\nI0914 12:03:18.272783 391 log.go:181] (0xc000d0d290) Reply frame received for 3\nI0914 12:03:18.272820 391 log.go:181] (0xc000d0d290) (0xc000d04000) Create stream\nI0914 12:03:18.272829 391 log.go:181] (0xc000d0d290) (0xc000d04000) Stream added, broadcasting: 5\nI0914 12:03:18.273707 391 log.go:181] (0xc000d0d290) Reply frame received for 5\nI0914 12:03:18.345362 391 log.go:181] (0xc000d0d290) Data frame received for 5\nI0914 12:03:18.345389 391 log.go:181] (0xc000d04000) (5) Data frame handling\nI0914 12:03:18.345398 
391 log.go:181] (0xc000d04000) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.215.128 80\nConnection to 10.103.215.128 80 port [tcp/http] succeeded!\nI0914 12:03:18.345426 391 log.go:181] (0xc000d0d290) Data frame received for 3\nI0914 12:03:18.345455 391 log.go:181] (0xc000b6c000) (3) Data frame handling\nI0914 12:03:18.345486 391 log.go:181] (0xc000d0d290) Data frame received for 5\nI0914 12:03:18.345513 391 log.go:181] (0xc000d04000) (5) Data frame handling\nI0914 12:03:18.346938 391 log.go:181] (0xc000d0d290) Data frame received for 1\nI0914 12:03:18.347055 391 log.go:181] (0xc000d04960) (1) Data frame handling\nI0914 12:03:18.347097 391 log.go:181] (0xc000d04960) (1) Data frame sent\nI0914 12:03:18.347130 391 log.go:181] (0xc000d0d290) (0xc000d04960) Stream removed, broadcasting: 1\nI0914 12:03:18.347149 391 log.go:181] (0xc000d0d290) Go away received\nI0914 12:03:18.347676 391 log.go:181] (0xc000d0d290) (0xc000d04960) Stream removed, broadcasting: 1\nI0914 12:03:18.347699 391 log.go:181] (0xc000d0d290) (0xc000b6c000) Stream removed, broadcasting: 3\nI0914 12:03:18.347721 391 log.go:181] (0xc000d0d290) (0xc000d04000) Stream removed, broadcasting: 5\n" Sep 14 12:03:18.352: INFO: stdout: "" Sep 14 12:03:18.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3865 execpod-affinitycmj2q -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.215.128:80/ ; done' Sep 14 12:03:18.677: INFO: stderr: "I0914 12:03:18.488357 409 log.go:181] (0xc0001a00b0) (0xc000a83860) Create stream\nI0914 12:03:18.488440 409 log.go:181] (0xc0001a00b0) (0xc000a83860) Stream added, broadcasting: 1\nI0914 12:03:18.491216 409 log.go:181] (0xc0001a00b0) Reply frame received for 1\nI0914 12:03:18.491247 409 log.go:181] (0xc0001a00b0) (0xc0007a4000) Create stream\nI0914 12:03:18.491257 409 log.go:181] (0xc0001a00b0) (0xc0007a4000) Stream added, broadcasting: 3\nI0914 
12:03:18.492322 409 log.go:181] (0xc0001a00b0) Reply frame received for 3\nI0914 12:03:18.492369 409 log.go:181] (0xc0001a00b0) (0xc000a83900) Create stream\nI0914 12:03:18.492383 409 log.go:181] (0xc0001a00b0) (0xc000a83900) Stream added, broadcasting: 5\nI0914 12:03:18.493473 409 log.go:181] (0xc0001a00b0) Reply frame received for 5\nI0914 12:03:18.564699 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.564746 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.564764 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.564783 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.564793 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.564810 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.568096 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.568241 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.568296 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.568450 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.568465 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.568471 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.568495 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.568539 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.568575 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.574731 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.574755 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.574774 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.575218 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.575247 409 log.go:181] 
(0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.575262 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.575283 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.575293 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.575304 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.582555 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.582577 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.582596 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.583569 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.583604 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.583643 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.583672 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.583693 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.583712 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.590274 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.590297 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.590315 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.590953 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.590980 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.591011 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.591024 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.591050 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.591071 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.596234 409 log.go:181] (0xc0001a00b0) Data 
frame received for 3\nI0914 12:03:18.596248 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.596256 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.596842 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.596856 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.596869 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.596903 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.596925 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.596948 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.603309 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.603335 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.603359 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.604099 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.604123 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.604237 409 log.go:181] (0xc000a83900) (5) Data frame sent\nI0914 12:03:18.604262 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.604272 409 log.go:181] (0xc000a83900) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.604302 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.604323 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.604338 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.604357 409 log.go:181] (0xc000a83900) (5) Data frame sent\nI0914 12:03:18.609390 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.609416 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.609435 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.610111 409 log.go:181] (0xc0001a00b0) Data frame 
received for 5\nI0914 12:03:18.610134 409 log.go:181] (0xc000a83900) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.610155 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.610179 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.610199 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.610214 409 log.go:181] (0xc000a83900) (5) Data frame sent\nI0914 12:03:18.615167 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.615186 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.615196 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.616078 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.616102 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.616124 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.616228 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.616251 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.616264 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.621222 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.621251 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.621270 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.621968 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.622001 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.622016 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.622036 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.622046 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.622059 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.103.215.128:80/\nI0914 12:03:18.628035 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.628059 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.628079 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.628936 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.628979 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.628996 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.629014 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.629025 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.629041 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.636202 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.636232 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.636249 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.637068 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.637091 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.637107 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.637120 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.637130 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.637148 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.642512 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.642541 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.642562 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.643268 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.643304 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.643317 409 log.go:181] (0xc0007a4000) 
(3) Data frame sent\nI0914 12:03:18.643336 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.643346 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.643384 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.648980 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.649005 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.649022 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.649656 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.649688 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.649700 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.649712 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.649719 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.649726 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.655533 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.655562 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.655587 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.655928 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.655955 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.655966 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.655982 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.655990 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.655999 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.660826 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.660847 409 log.go:181] (0xc0007a4000) (3) Data frame 
handling\nI0914 12:03:18.660866 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.661699 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.661723 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.661735 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.661752 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.661815 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.661836 409 log.go:181] (0xc000a83900) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.215.128:80/\nI0914 12:03:18.672883 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.672924 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.672942 409 log.go:181] (0xc0007a4000) (3) Data frame sent\nI0914 12:03:18.672956 409 log.go:181] (0xc0001a00b0) Data frame received for 3\nI0914 12:03:18.672968 409 log.go:181] (0xc0007a4000) (3) Data frame handling\nI0914 12:03:18.673019 409 log.go:181] (0xc0001a00b0) Data frame received for 5\nI0914 12:03:18.673109 409 log.go:181] (0xc000a83900) (5) Data frame handling\nI0914 12:03:18.674277 409 log.go:181] (0xc0001a00b0) Data frame received for 1\nI0914 12:03:18.674302 409 log.go:181] (0xc000a83860) (1) Data frame handling\nI0914 12:03:18.674322 409 log.go:181] (0xc000a83860) (1) Data frame sent\nI0914 12:03:18.674339 409 log.go:181] (0xc0001a00b0) (0xc000a83860) Stream removed, broadcasting: 1\nI0914 12:03:18.674357 409 log.go:181] (0xc0001a00b0) Go away received\nI0914 12:03:18.674734 409 log.go:181] (0xc0001a00b0) (0xc000a83860) Stream removed, broadcasting: 1\nI0914 12:03:18.674751 409 log.go:181] (0xc0001a00b0) (0xc0007a4000) Stream removed, broadcasting: 3\nI0914 12:03:18.674764 409 log.go:181] (0xc0001a00b0) (0xc000a83900) Stream removed, broadcasting: 5\n" Sep 14 12:03:18.678: INFO: stdout: 
"\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw\naffinity-clusterip-wlbjw"
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Received response from host: affinity-clusterip-wlbjw
Sep 14 12:03:18.678: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-3865, will wait for the garbage collector to delete the pods
Sep 14 12:03:18.755: INFO: Deleting ReplicationController affinity-clusterip took: 6.46516ms
Sep 14 12:03:19.156: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.240686ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:03:35.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3865" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.307 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":32,"skipped":440,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:03:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 14 12:03:35.843: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:03:35.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4589" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":33,"skipped":445,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:03:35.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-6dc50657-e36f-4f18-bf35-b2079b339a1d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:03:42.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6830" for this suite. • [SLOW TEST:6.178 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":446,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:03:42.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account 
to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Sep 14 12:03:42.209: INFO: Waiting up to 5m0s for pod "var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b" in namespace "var-expansion-9044" to be "Succeeded or Failed" Sep 14 12:03:42.247: INFO: Pod "var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.254216ms Sep 14 12:03:44.251: INFO: Pod "var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041359434s Sep 14 12:03:46.264: INFO: Pod "var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055009671s STEP: Saw pod success Sep 14 12:03:46.264: INFO: Pod "var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b" satisfied condition "Succeeded or Failed" Sep 14 12:03:46.267: INFO: Trying to get logs from node latest-worker2 pod var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b container dapi-container: STEP: delete the pod Sep 14 12:03:46.316: INFO: Waiting for pod var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b to disappear Sep 14 12:03:46.528: INFO: Pod var-expansion-ecc61f07-0d09-4c53-8685-a3d7155c5a2b no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:03:46.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9044" for this suite. 
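[Editor's note] The variable-expansion test above creates a pod whose container args reference an environment variable with `$(VAR)` syntax and checks that the kubelet expands it before the container starts. An illustrative manifest of the kind being exercised (pod name, variable name, and value are made up; only the container name `dapi-container` appears in this log):

```yaml
# Illustrative only -- not the exact pod the e2e framework creates.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    # $(TEST_VAR) is expanded by Kubernetes before the container runs;
    # a literal dollar sign would be written as "$$".
    args: ["echo test-value is $(TEST_VAR)"]
    env:
    - name: TEST_VAR              # hypothetical variable
      value: "test-value"
```

The test then waits for the pod to reach `Succeeded` and checks the container log for the expanded value, which matches the "Succeeded or Failed" polling visible above.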
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":460,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:03:46.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0914 12:03:56.717516 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 14 12:04:58.737: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:04:58.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-368" for this suite. 
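[Editor's note] The garbage-collector test above creates a ReplicationController, deletes it without orphaning, and waits for its pods to be garbage-collected. The mechanism relies on each pod carrying an `ownerReference` to the RC; deleting the owner with a cascading propagation policy (e.g. `propagationPolicy: Background` in the delete options) lets the garbage collector remove the dependents. A rough sketch of the owner object involved (all names made up):

```yaml
# Illustrative ReplicationController; pods it creates get an ownerReference
# back to it, so a non-orphaning delete of the RC also deletes the pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc                # hypothetical name
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
```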
• [SLOW TEST:72.209 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":36,"skipped":476,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:04:58.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:05:05.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1653" for this suite. • [SLOW TEST:7.128 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":37,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:05:05.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Sep 14 12:05:05.940: INFO: created test-podtemplate-1 Sep 14 12:05:05.960: INFO: created test-podtemplate-2 Sep 14 12:05:05.989: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Sep 14 12:05:05.992: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Sep 14 12:05:06.013: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:05:06.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3123" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":38,"skipped":542,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:05:06.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5323 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5323 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5323 Sep 14 12:05:06.214: INFO: Found 0 stateful pods, waiting for 1 Sep 14 12:05:16.218: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set 
scale up will not halt with unhealthy stateful pod Sep 14 12:05:16.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:05:16.464: INFO: stderr: "I0914 12:05:16.343298 445 log.go:181] (0xc00003a420) (0xc000991400) Create stream\nI0914 12:05:16.343349 445 log.go:181] (0xc00003a420) (0xc000991400) Stream added, broadcasting: 1\nI0914 12:05:16.344754 445 log.go:181] (0xc00003a420) Reply frame received for 1\nI0914 12:05:16.344794 445 log.go:181] (0xc00003a420) (0xc0003d6320) Create stream\nI0914 12:05:16.344819 445 log.go:181] (0xc00003a420) (0xc0003d6320) Stream added, broadcasting: 3\nI0914 12:05:16.345437 445 log.go:181] (0xc00003a420) Reply frame received for 3\nI0914 12:05:16.345467 445 log.go:181] (0xc00003a420) (0xc00031c6e0) Create stream\nI0914 12:05:16.345475 445 log.go:181] (0xc00003a420) (0xc00031c6e0) Stream added, broadcasting: 5\nI0914 12:05:16.346124 445 log.go:181] (0xc00003a420) Reply frame received for 5\nI0914 12:05:16.416604 445 log.go:181] (0xc00003a420) Data frame received for 5\nI0914 12:05:16.416648 445 log.go:181] (0xc00031c6e0) (5) Data frame handling\nI0914 12:05:16.416681 445 log.go:181] (0xc00031c6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:05:16.458535 445 log.go:181] (0xc00003a420) Data frame received for 3\nI0914 12:05:16.458566 445 log.go:181] (0xc0003d6320) (3) Data frame handling\nI0914 12:05:16.458587 445 log.go:181] (0xc0003d6320) (3) Data frame sent\nI0914 12:05:16.458751 445 log.go:181] (0xc00003a420) Data frame received for 5\nI0914 12:05:16.458765 445 log.go:181] (0xc00031c6e0) (5) Data frame handling\nI0914 12:05:16.458783 445 log.go:181] (0xc00003a420) Data frame received for 3\nI0914 12:05:16.458811 445 log.go:181] (0xc0003d6320) (3) Data frame handling\nI0914 12:05:16.461244 445 log.go:181] 
(0xc00003a420) Data frame received for 1\nI0914 12:05:16.461269 445 log.go:181] (0xc000991400) (1) Data frame handling\nI0914 12:05:16.461278 445 log.go:181] (0xc000991400) (1) Data frame sent\nI0914 12:05:16.461288 445 log.go:181] (0xc00003a420) (0xc000991400) Stream removed, broadcasting: 1\nI0914 12:05:16.461348 445 log.go:181] (0xc00003a420) Go away received\nI0914 12:05:16.461585 445 log.go:181] (0xc00003a420) (0xc000991400) Stream removed, broadcasting: 1\nI0914 12:05:16.461600 445 log.go:181] (0xc00003a420) (0xc0003d6320) Stream removed, broadcasting: 3\nI0914 12:05:16.461608 445 log.go:181] (0xc00003a420) (0xc00031c6e0) Stream removed, broadcasting: 5\n" Sep 14 12:05:16.464: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:05:16.464: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:05:16.469: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 14 12:05:26.473: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:05:26.473: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:05:26.541: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:26.541: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:26.541: INFO: Sep 14 12:05:26.541: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 14 12:05:27.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.942868509s Sep 
14 12:05:28.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.853260874s Sep 14 12:05:29.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.743446303s Sep 14 12:05:30.793: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.714425953s Sep 14 12:05:31.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.691004938s Sep 14 12:05:32.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.687366212s Sep 14 12:05:33.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.682250282s Sep 14 12:05:34.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.678304024s Sep 14 12:05:35.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 673.061068ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5323 Sep 14 12:05:36.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:05:37.030: INFO: stderr: "I0914 12:05:36.944235 464 log.go:181] (0xc00011efd0) (0xc000e10820) Create stream\nI0914 12:05:36.944303 464 log.go:181] (0xc00011efd0) (0xc000e10820) Stream added, broadcasting: 1\nI0914 12:05:36.948546 464 log.go:181] (0xc00011efd0) Reply frame received for 1\nI0914 12:05:36.948600 464 log.go:181] (0xc00011efd0) (0xc000e10000) Create stream\nI0914 12:05:36.948615 464 log.go:181] (0xc00011efd0) (0xc000e10000) Stream added, broadcasting: 3\nI0914 12:05:36.949493 464 log.go:181] (0xc00011efd0) Reply frame received for 3\nI0914 12:05:36.949521 464 log.go:181] (0xc00011efd0) (0xc000e100a0) Create stream\nI0914 12:05:36.949531 464 log.go:181] (0xc00011efd0) (0xc000e100a0) Stream added, broadcasting: 5\nI0914 12:05:36.950364 464 log.go:181] (0xc00011efd0) Reply frame received for 5\nI0914 12:05:37.025354 464 
log.go:181] (0xc00011efd0) Data frame received for 3\nI0914 12:05:37.025371 464 log.go:181] (0xc000e10000) (3) Data frame handling\nI0914 12:05:37.025379 464 log.go:181] (0xc000e10000) (3) Data frame sent\nI0914 12:05:37.025539 464 log.go:181] (0xc00011efd0) Data frame received for 3\nI0914 12:05:37.025571 464 log.go:181] (0xc000e10000) (3) Data frame handling\nI0914 12:05:37.025611 464 log.go:181] (0xc00011efd0) Data frame received for 5\nI0914 12:05:37.025636 464 log.go:181] (0xc000e100a0) (5) Data frame handling\nI0914 12:05:37.025657 464 log.go:181] (0xc000e100a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 12:05:37.025682 464 log.go:181] (0xc00011efd0) Data frame received for 5\nI0914 12:05:37.025700 464 log.go:181] (0xc000e100a0) (5) Data frame handling\nI0914 12:05:37.027240 464 log.go:181] (0xc00011efd0) Data frame received for 1\nI0914 12:05:37.027250 464 log.go:181] (0xc000e10820) (1) Data frame handling\nI0914 12:05:37.027256 464 log.go:181] (0xc000e10820) (1) Data frame sent\nI0914 12:05:37.027263 464 log.go:181] (0xc00011efd0) (0xc000e10820) Stream removed, broadcasting: 1\nI0914 12:05:37.027513 464 log.go:181] (0xc00011efd0) (0xc000e10820) Stream removed, broadcasting: 1\nI0914 12:05:37.027529 464 log.go:181] (0xc00011efd0) (0xc000e10000) Stream removed, broadcasting: 3\nI0914 12:05:37.027538 464 log.go:181] (0xc00011efd0) (0xc000e100a0) Stream removed, broadcasting: 5\n" Sep 14 12:05:37.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:05:37.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:05:37.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:05:37.341: INFO: stderr: "I0914 
12:05:37.265050 482 log.go:181] (0xc0007c96b0) (0xc0007c0960) Create stream\nI0914 12:05:37.265146 482 log.go:181] (0xc0007c96b0) (0xc0007c0960) Stream added, broadcasting: 1\nI0914 12:05:37.272082 482 log.go:181] (0xc0007c96b0) Reply frame received for 1\nI0914 12:05:37.272127 482 log.go:181] (0xc0007c96b0) (0xc000d36000) Create stream\nI0914 12:05:37.272208 482 log.go:181] (0xc0007c96b0) (0xc000d36000) Stream added, broadcasting: 3\nI0914 12:05:37.273879 482 log.go:181] (0xc0007c96b0) Reply frame received for 3\nI0914 12:05:37.273916 482 log.go:181] (0xc0007c96b0) (0xc0007c0000) Create stream\nI0914 12:05:37.273927 482 log.go:181] (0xc0007c96b0) (0xc0007c0000) Stream added, broadcasting: 5\nI0914 12:05:37.274783 482 log.go:181] (0xc0007c96b0) Reply frame received for 5\nI0914 12:05:37.336892 482 log.go:181] (0xc0007c96b0) Data frame received for 5\nI0914 12:05:37.336917 482 log.go:181] (0xc0007c0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0914 12:05:37.336951 482 log.go:181] (0xc0007c96b0) Data frame received for 3\nI0914 12:05:37.336997 482 log.go:181] (0xc000d36000) (3) Data frame handling\nI0914 12:05:37.337016 482 log.go:181] (0xc000d36000) (3) Data frame sent\nI0914 12:05:37.337057 482 log.go:181] (0xc0007c0000) (5) Data frame sent\nI0914 12:05:37.337109 482 log.go:181] (0xc0007c96b0) Data frame received for 5\nI0914 12:05:37.337130 482 log.go:181] (0xc0007c0000) (5) Data frame handling\nI0914 12:05:37.337166 482 log.go:181] (0xc0007c96b0) Data frame received for 3\nI0914 12:05:37.337187 482 log.go:181] (0xc000d36000) (3) Data frame handling\nI0914 12:05:37.338338 482 log.go:181] (0xc0007c96b0) Data frame received for 1\nI0914 12:05:37.338353 482 log.go:181] (0xc0007c0960) (1) Data frame handling\nI0914 12:05:37.338366 482 log.go:181] (0xc0007c0960) (1) Data frame sent\nI0914 12:05:37.338378 482 log.go:181] (0xc0007c96b0) (0xc0007c0960) Stream 
removed, broadcasting: 1\nI0914 12:05:37.338437 482 log.go:181] (0xc0007c96b0) Go away received\nI0914 12:05:37.338676 482 log.go:181] (0xc0007c96b0) (0xc0007c0960) Stream removed, broadcasting: 1\nI0914 12:05:37.338694 482 log.go:181] (0xc0007c96b0) (0xc000d36000) Stream removed, broadcasting: 3\nI0914 12:05:37.338703 482 log.go:181] (0xc0007c96b0) (0xc0007c0000) Stream removed, broadcasting: 5\n" Sep 14 12:05:37.342: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:05:37.342: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:05:37.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:05:37.544: INFO: stderr: "I0914 12:05:37.473719 500 log.go:181] (0xc000014840) (0xc0000bdea0) Create stream\nI0914 12:05:37.473775 500 log.go:181] (0xc000014840) (0xc0000bdea0) Stream added, broadcasting: 1\nI0914 12:05:37.476908 500 log.go:181] (0xc000014840) Reply frame received for 1\nI0914 12:05:37.477091 500 log.go:181] (0xc000014840) (0xc0006966e0) Create stream\nI0914 12:05:37.477131 500 log.go:181] (0xc000014840) (0xc0006966e0) Stream added, broadcasting: 3\nI0914 12:05:37.479149 500 log.go:181] (0xc000014840) Reply frame received for 3\nI0914 12:05:37.479178 500 log.go:181] (0xc000014840) (0xc000b12000) Create stream\nI0914 12:05:37.479189 500 log.go:181] (0xc000014840) (0xc000b12000) Stream added, broadcasting: 5\nI0914 12:05:37.480406 500 log.go:181] (0xc000014840) Reply frame received for 5\nI0914 12:05:37.537751 500 log.go:181] (0xc000014840) Data frame received for 3\nI0914 12:05:37.537774 500 log.go:181] (0xc0006966e0) (3) Data frame handling\nI0914 12:05:37.537802 500 log.go:181] (0xc0006966e0) (3) Data frame sent\nI0914 12:05:37.537814 500 log.go:181] 
(0xc000014840) Data frame received for 3\nI0914 12:05:37.537821 500 log.go:181] (0xc0006966e0) (3) Data frame handling\nI0914 12:05:37.538020 500 log.go:181] (0xc000014840) Data frame received for 5\nI0914 12:05:37.538046 500 log.go:181] (0xc000b12000) (5) Data frame handling\nI0914 12:05:37.538070 500 log.go:181] (0xc000b12000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0914 12:05:37.538090 500 log.go:181] (0xc000014840) Data frame received for 5\nI0914 12:05:37.538127 500 log.go:181] (0xc000b12000) (5) Data frame handling\nI0914 12:05:37.540064 500 log.go:181] (0xc000014840) Data frame received for 1\nI0914 12:05:37.540093 500 log.go:181] (0xc0000bdea0) (1) Data frame handling\nI0914 12:05:37.540126 500 log.go:181] (0xc0000bdea0) (1) Data frame sent\nI0914 12:05:37.540269 500 log.go:181] (0xc000014840) (0xc0000bdea0) Stream removed, broadcasting: 1\nI0914 12:05:37.540332 500 log.go:181] (0xc000014840) Go away received\nI0914 12:05:37.540660 500 log.go:181] (0xc000014840) (0xc0000bdea0) Stream removed, broadcasting: 1\nI0914 12:05:37.540678 500 log.go:181] (0xc000014840) (0xc0006966e0) Stream removed, broadcasting: 3\nI0914 12:05:37.540687 500 log.go:181] (0xc000014840) (0xc000b12000) Stream removed, broadcasting: 5\n" Sep 14 12:05:37.545: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:05:37.545: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:05:37.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:05:37.550: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:05:37.550: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 14 
12:05:37.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:05:37.761: INFO: stderr: "I0914 12:05:37.680263 518 log.go:181] (0xc000768000) (0xc000d12280) Create stream\nI0914 12:05:37.680318 518 log.go:181] (0xc000768000) (0xc000d12280) Stream added, broadcasting: 1\nI0914 12:05:37.681952 518 log.go:181] (0xc000768000) Reply frame received for 1\nI0914 12:05:37.681987 518 log.go:181] (0xc000768000) (0xc000516280) Create stream\nI0914 12:05:37.681997 518 log.go:181] (0xc000768000) (0xc000516280) Stream added, broadcasting: 3\nI0914 12:05:37.683009 518 log.go:181] (0xc000768000) Reply frame received for 3\nI0914 12:05:37.683050 518 log.go:181] (0xc000768000) (0xc000376780) Create stream\nI0914 12:05:37.683061 518 log.go:181] (0xc000768000) (0xc000376780) Stream added, broadcasting: 5\nI0914 12:05:37.684086 518 log.go:181] (0xc000768000) Reply frame received for 5\nI0914 12:05:37.755598 518 log.go:181] (0xc000768000) Data frame received for 3\nI0914 12:05:37.755629 518 log.go:181] (0xc000516280) (3) Data frame handling\nI0914 12:05:37.755639 518 log.go:181] (0xc000516280) (3) Data frame sent\nI0914 12:05:37.755644 518 log.go:181] (0xc000768000) Data frame received for 3\nI0914 12:05:37.755650 518 log.go:181] (0xc000516280) (3) Data frame handling\nI0914 12:05:37.755669 518 log.go:181] (0xc000768000) Data frame received for 5\nI0914 12:05:37.755690 518 log.go:181] (0xc000376780) (5) Data frame handling\nI0914 12:05:37.755712 518 log.go:181] (0xc000376780) (5) Data frame sent\nI0914 12:05:37.755727 518 log.go:181] (0xc000768000) Data frame received for 5\nI0914 12:05:37.755737 518 log.go:181] (0xc000376780) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:05:37.757981 518 log.go:181] (0xc000768000) Data frame received for 1\nI0914 12:05:37.757995 518 
log.go:181] (0xc000d12280) (1) Data frame handling\nI0914 12:05:37.758003 518 log.go:181] (0xc000d12280) (1) Data frame sent\nI0914 12:05:37.758016 518 log.go:181] (0xc000768000) (0xc000d12280) Stream removed, broadcasting: 1\nI0914 12:05:37.758065 518 log.go:181] (0xc000768000) Go away received\nI0914 12:05:37.758330 518 log.go:181] (0xc000768000) (0xc000d12280) Stream removed, broadcasting: 1\nI0914 12:05:37.758344 518 log.go:181] (0xc000768000) (0xc000516280) Stream removed, broadcasting: 3\nI0914 12:05:37.758352 518 log.go:181] (0xc000768000) (0xc000376780) Stream removed, broadcasting: 5\n" Sep 14 12:05:37.761: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:05:37.761: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:05:37.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:05:38.036: INFO: stderr: "I0914 12:05:37.902882 536 log.go:181] (0xc0008791e0) (0xc000840fa0) Create stream\nI0914 12:05:37.902932 536 log.go:181] (0xc0008791e0) (0xc000840fa0) Stream added, broadcasting: 1\nI0914 12:05:37.905785 536 log.go:181] (0xc0008791e0) Reply frame received for 1\nI0914 12:05:37.905830 536 log.go:181] (0xc0008791e0) (0xc000c9a3c0) Create stream\nI0914 12:05:37.905844 536 log.go:181] (0xc0008791e0) (0xc000c9a3c0) Stream added, broadcasting: 3\nI0914 12:05:37.906939 536 log.go:181] (0xc0008791e0) Reply frame received for 3\nI0914 12:05:37.906975 536 log.go:181] (0xc0008791e0) (0xc000a18140) Create stream\nI0914 12:05:37.906989 536 log.go:181] (0xc0008791e0) (0xc000a18140) Stream added, broadcasting: 5\nI0914 12:05:37.907861 536 log.go:181] (0xc0008791e0) Reply frame received for 5\nI0914 12:05:37.969850 536 log.go:181] (0xc0008791e0) Data frame 
received for 5\nI0914 12:05:37.969884 536 log.go:181] (0xc000a18140) (5) Data frame handling\nI0914 12:05:37.969905 536 log.go:181] (0xc000a18140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:05:38.029712 536 log.go:181] (0xc0008791e0) Data frame received for 5\nI0914 12:05:38.029733 536 log.go:181] (0xc000a18140) (5) Data frame handling\nI0914 12:05:38.029766 536 log.go:181] (0xc0008791e0) Data frame received for 3\nI0914 12:05:38.029791 536 log.go:181] (0xc000c9a3c0) (3) Data frame handling\nI0914 12:05:38.029812 536 log.go:181] (0xc000c9a3c0) (3) Data frame sent\nI0914 12:05:38.029823 536 log.go:181] (0xc0008791e0) Data frame received for 3\nI0914 12:05:38.029834 536 log.go:181] (0xc000c9a3c0) (3) Data frame handling\nI0914 12:05:38.031452 536 log.go:181] (0xc0008791e0) Data frame received for 1\nI0914 12:05:38.031468 536 log.go:181] (0xc000840fa0) (1) Data frame handling\nI0914 12:05:38.031475 536 log.go:181] (0xc000840fa0) (1) Data frame sent\nI0914 12:05:38.031482 536 log.go:181] (0xc0008791e0) (0xc000840fa0) Stream removed, broadcasting: 1\nI0914 12:05:38.031518 536 log.go:181] (0xc0008791e0) Go away received\nI0914 12:05:38.031775 536 log.go:181] (0xc0008791e0) (0xc000840fa0) Stream removed, broadcasting: 1\nI0914 12:05:38.031787 536 log.go:181] (0xc0008791e0) (0xc000c9a3c0) Stream removed, broadcasting: 3\nI0914 12:05:38.031792 536 log.go:181] (0xc0008791e0) (0xc000a18140) Stream removed, broadcasting: 5\n" Sep 14 12:05:38.036: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:05:38.036: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:05:38.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5323 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 
12:05:38.367: INFO: stderr: "I0914 12:05:38.241624 554 log.go:181] (0xc000840000) (0xc000a1f720) Create stream\nI0914 12:05:38.241691 554 log.go:181] (0xc000840000) (0xc000a1f720) Stream added, broadcasting: 1\nI0914 12:05:38.243764 554 log.go:181] (0xc000840000) Reply frame received for 1\nI0914 12:05:38.243806 554 log.go:181] (0xc000840000) (0xc000b295e0) Create stream\nI0914 12:05:38.243816 554 log.go:181] (0xc000840000) (0xc000b295e0) Stream added, broadcasting: 3\nI0914 12:05:38.244901 554 log.go:181] (0xc000840000) Reply frame received for 3\nI0914 12:05:38.244934 554 log.go:181] (0xc000840000) (0xc000b29d60) Create stream\nI0914 12:05:38.244943 554 log.go:181] (0xc000840000) (0xc000b29d60) Stream added, broadcasting: 5\nI0914 12:05:38.245802 554 log.go:181] (0xc000840000) Reply frame received for 5\nI0914 12:05:38.315648 554 log.go:181] (0xc000840000) Data frame received for 5\nI0914 12:05:38.315676 554 log.go:181] (0xc000b29d60) (5) Data frame handling\nI0914 12:05:38.315689 554 log.go:181] (0xc000b29d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:05:38.361694 554 log.go:181] (0xc000840000) Data frame received for 3\nI0914 12:05:38.361719 554 log.go:181] (0xc000b295e0) (3) Data frame handling\nI0914 12:05:38.361736 554 log.go:181] (0xc000b295e0) (3) Data frame sent\nI0914 12:05:38.361742 554 log.go:181] (0xc000840000) Data frame received for 3\nI0914 12:05:38.361746 554 log.go:181] (0xc000b295e0) (3) Data frame handling\nI0914 12:05:38.362008 554 log.go:181] (0xc000840000) Data frame received for 5\nI0914 12:05:38.362049 554 log.go:181] (0xc000b29d60) (5) Data frame handling\nI0914 12:05:38.363710 554 log.go:181] (0xc000840000) Data frame received for 1\nI0914 12:05:38.363722 554 log.go:181] (0xc000a1f720) (1) Data frame handling\nI0914 12:05:38.363728 554 log.go:181] (0xc000a1f720) (1) Data frame sent\nI0914 12:05:38.363737 554 log.go:181] (0xc000840000) (0xc000a1f720) Stream removed, broadcasting: 1\nI0914 
12:05:38.363756 554 log.go:181] (0xc000840000) Go away received\nI0914 12:05:38.364395 554 log.go:181] (0xc000840000) (0xc000a1f720) Stream removed, broadcasting: 1\nI0914 12:05:38.364426 554 log.go:181] (0xc000840000) (0xc000b295e0) Stream removed, broadcasting: 3\nI0914 12:05:38.364443 554 log.go:181] (0xc000840000) (0xc000b29d60) Stream removed, broadcasting: 5\n" Sep 14 12:05:38.367: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:05:38.367: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:05:38.367: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:05:38.371: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Sep 14 12:05:48.422: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:05:48.422: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:05:48.422: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:05:48.783: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:48.783: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:48.783: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:48.783: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:48.783: INFO: Sep 14 12:05:48.783: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 14 12:05:49.864: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:49.864: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:49.864: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:49.864: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:49.864: INFO: Sep 14 12:05:49.864: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 14 12:05:50.869: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:50.869: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:50.869: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:50.869: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:50.869: INFO: Sep 14 12:05:50.869: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 14 12:05:51.874: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:51.874: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:51.875: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:51.875: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:51.875: INFO: Sep 14 12:05:51.875: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 14 12:05:52.880: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:52.880: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:52.880: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:52.880: INFO: Sep 14 12:05:52.880: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 14 12:05:53.885: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:53.885: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:53.885: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:53.886: INFO: Sep 14 12:05:53.886: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 14 12:05:54.891: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:54.891: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:54.891: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:54.891: INFO: Sep 14 12:05:54.891: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 14 12:05:55.896: INFO: POD NODE PHASE GRACE CONDITIONS Sep 14 12:05:55.896: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:06 +0000 UTC }] Sep 14 12:05:55.896: INFO: ss-2 latest-worker2 Running 
30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-14 12:05:26 +0000 UTC }] Sep 14 12:05:55.896: INFO: Sep 14 12:05:55.896: INFO: StatefulSet ss has not reached scale 0, at 2 Sep 14 12:05:56.900: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.53248451s Sep 14 12:05:57.905: INFO: Verifying statefulset ss doesn't scale past 0 for another 528.639273ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5323 Sep 14 12:05:58.909: INFO: Scaling statefulset ss to 0 Sep 14 12:05:58.919: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 14 12:05:58.921: INFO: Deleting all statefulset in ns statefulset-5323 Sep 14 12:05:58.924: INFO: Scaling statefulset ss to 0 Sep 14 12:05:58.932: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:05:58.934: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:05:58.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5323" for this suite. 
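Note on the mechanism visible in the log above: the suite toggles pod readiness by moving index.html out of the Apache docroot (so the HTTP readiness probe starts failing) and back in, always with `|| true` appended. That suffix is why the reruns against ss-1 and ss-2, where `mv` reports "can't rename '/tmp/index.html': No such file or directory", still exit 0 instead of failing the test. A minimal local sketch of the idiom, using temp directories in place of the pod filesystem (paths are illustrative, not the pod's):

```shell
# Sketch of the readiness-toggling command the suite runs inside each pod:
#   mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true
# Temp directories stand in for the container filesystem.
htdocs=$(mktemp -d)
stash=$(mktemp -d)
echo "it works" > "$htdocs/index.html"

mv -v "$htdocs/index.html" "$stash/" || true   # first run: file is moved
mv -v "$htdocs/index.html" "$stash/" || true   # rerun: mv fails, but || true keeps exit status 0
echo "exit=$?"
```

The `|| true` makes the command idempotent from the test framework's point of view, which matters because the same exec is issued against every replica regardless of whether its file was already moved.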
• [SLOW TEST:52.932 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":39,"skipped":552,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:05:58.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:05:59.033: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09" in namespace "downward-api-6702" to be "Succeeded or Failed" Sep 14 12:05:59.055: INFO: Pod "downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09": Phase="Pending", Reason="", readiness=false. Elapsed: 22.648584ms Sep 14 12:06:01.068: INFO: Pod "downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034818165s Sep 14 12:06:03.071: INFO: Pod "downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03870386s STEP: Saw pod success Sep 14 12:06:03.072: INFO: Pod "downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09" satisfied condition "Succeeded or Failed" Sep 14 12:06:03.075: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09 container client-container: STEP: delete the pod Sep 14 12:06:03.130: INFO: Waiting for pod downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09 to disappear Sep 14 12:06:03.139: INFO: Pod downwardapi-volume-f252cd40-f81d-4f73-8308-155d2e892a09 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:03.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6702" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:03.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-80b21ba0-b4e0-4f0a-ae45-918c5f34663e STEP: Creating a pod to test consume secrets Sep 14 12:06:03.261: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1" in namespace "projected-640" to be "Succeeded or Failed" Sep 14 12:06:03.283: INFO: Pod "pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.666591ms Sep 14 12:06:05.288: INFO: Pod "pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026932634s Sep 14 12:06:07.292: INFO: Pod "pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031221028s STEP: Saw pod success Sep 14 12:06:07.292: INFO: Pod "pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1" satisfied condition "Succeeded or Failed" Sep 14 12:06:07.295: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1 container projected-secret-volume-test: STEP: delete the pod Sep 14 12:06:07.328: INFO: Waiting for pod pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1 to disappear Sep 14 12:06:07.338: INFO: Pod pod-projected-secrets-902b73a1-9e4f-4a2c-b280-2edb74d1a2d1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:07.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-640" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:07.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Sep 14 12:06:07.442: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Sep 14 12:06:07.457: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 14 12:06:07.458: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Sep 14 12:06:07.631: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 14 12:06:07.631: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Sep 14 12:06:07.823: INFO: Verifying 
requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Sep 14 12:06:07.823: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Sep 14 12:06:15.018: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:15.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-297" for this suite. • [SLOW TEST:7.732 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":42,"skipped":628,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:15.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4fa9f452-c566-4ff5-a63a-673c1e00f722 STEP: Creating a pod to test consume secrets Sep 14 12:06:15.183: INFO: Waiting up to 5m0s for pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243" in namespace "secrets-7151" to be "Succeeded or Failed" Sep 14 12:06:15.187: INFO: Pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.30626ms Sep 14 12:06:17.192: INFO: Pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008786768s Sep 14 12:06:19.196: INFO: Pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243": Phase="Running", Reason="", readiness=true. Elapsed: 4.013232209s Sep 14 12:06:21.370: INFO: Pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186684699s STEP: Saw pod success Sep 14 12:06:21.370: INFO: Pod "pod-secrets-2e969359-35c5-46b6-821c-160a62f60243" satisfied condition "Succeeded or Failed" Sep 14 12:06:21.373: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2e969359-35c5-46b6-821c-160a62f60243 container secret-volume-test: STEP: delete the pod Sep 14 12:06:21.819: INFO: Waiting for pod pod-secrets-2e969359-35c5-46b6-821c-160a62f60243 to disappear Sep 14 12:06:21.978: INFO: Pod pod-secrets-2e969359-35c5-46b6-821c-160a62f60243 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:21.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7151" for this suite. 
• [SLOW TEST:6.887 seconds] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":648,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:21.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-tb48 STEP: Creating a pod to test atomic-volume-subpath Sep 14 12:06:22.129: INFO: Waiting up 
to 5m0s for pod "pod-subpath-test-secret-tb48" in namespace "subpath-4123" to be "Succeeded or Failed" Sep 14 12:06:22.132: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006853ms Sep 14 12:06:24.356: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227258092s Sep 14 12:06:26.360: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 4.230767514s Sep 14 12:06:28.367: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 6.238300081s Sep 14 12:06:30.371: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 8.242266205s Sep 14 12:06:32.403: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 10.274256015s Sep 14 12:06:34.408: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 12.278772725s Sep 14 12:06:36.411: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 14.28223326s Sep 14 12:06:38.415: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 16.286010688s Sep 14 12:06:40.419: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 18.290346393s Sep 14 12:06:42.423: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 20.293897546s Sep 14 12:06:44.433: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Running", Reason="", readiness=true. Elapsed: 22.304338643s Sep 14 12:06:46.452: INFO: Pod "pod-subpath-test-secret-tb48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.322423272s STEP: Saw pod success Sep 14 12:06:46.452: INFO: Pod "pod-subpath-test-secret-tb48" satisfied condition "Succeeded or Failed" Sep 14 12:06:46.454: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-tb48 container test-container-subpath-secret-tb48: STEP: delete the pod Sep 14 12:06:46.488: INFO: Waiting for pod pod-subpath-test-secret-tb48 to disappear Sep 14 12:06:46.495: INFO: Pod pod-subpath-test-secret-tb48 no longer exists STEP: Deleting pod pod-subpath-test-secret-tb48 Sep 14 12:06:46.495: INFO: Deleting pod "pod-subpath-test-secret-tb48" in namespace "subpath-4123" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4123" for this suite. • [SLOW TEST:24.519 seconds] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":44,"skipped":650,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:46.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:06:46.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:06:48.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682006, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682006, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682007, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682006, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:06:52.017: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:06:52.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2249" for this suite. STEP: Destroying namespace "webhook-2249-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.693 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":45,"skipped":661,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:06:52.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7976 Sep 14 12:06:56.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 14 12:06:59.997: INFO: stderr: "I0914 12:06:59.905194 572 log.go:181] (0xc000cba000) (0xc000c660a0) Create stream\nI0914 12:06:59.905262 572 log.go:181] (0xc000cba000) (0xc000c660a0) Stream added, broadcasting: 1\nI0914 12:06:59.906950 572 log.go:181] (0xc000cba000) Reply frame received for 1\nI0914 12:06:59.906993 572 log.go:181] (0xc000cba000) (0xc000d86000) Create stream\nI0914 12:06:59.907009 572 log.go:181] (0xc000cba000) (0xc000d86000) Stream added, broadcasting: 3\nI0914 12:06:59.907958 572 log.go:181] (0xc000cba000) Reply frame received for 3\nI0914 12:06:59.907999 572 log.go:181] (0xc000cba000) (0xc000d0c1e0) Create stream\nI0914 12:06:59.908015 572 log.go:181] (0xc000cba000) (0xc000d0c1e0) Stream added, broadcasting: 5\nI0914 12:06:59.909148 572 log.go:181] (0xc000cba000) Reply frame received for 5\nI0914 12:06:59.988856 572 log.go:181] (0xc000cba000) Data frame received for 5\nI0914 12:06:59.988875 572 log.go:181] (0xc000d0c1e0) (5) Data frame handling\nI0914 12:06:59.988886 572 log.go:181] (0xc000d0c1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0914 12:06:59.991346 572 log.go:181] (0xc000cba000) Data frame received for 3\nI0914 12:06:59.991385 572 log.go:181] (0xc000d86000) (3) Data frame handling\nI0914 12:06:59.991407 572 log.go:181] (0xc000d86000) (3) Data frame sent\nI0914 12:06:59.991910 572 log.go:181] (0xc000cba000) Data frame received for 3\nI0914 12:06:59.991930 572 log.go:181] (0xc000d86000) (3) Data frame handling\nI0914 
12:06:59.992096 572 log.go:181] (0xc000cba000) Data frame received for 5\nI0914 12:06:59.992110 572 log.go:181] (0xc000d0c1e0) (5) Data frame handling\nI0914 12:06:59.993801 572 log.go:181] (0xc000cba000) Data frame received for 1\nI0914 12:06:59.993833 572 log.go:181] (0xc000c660a0) (1) Data frame handling\nI0914 12:06:59.993856 572 log.go:181] (0xc000c660a0) (1) Data frame sent\nI0914 12:06:59.993883 572 log.go:181] (0xc000cba000) (0xc000c660a0) Stream removed, broadcasting: 1\nI0914 12:06:59.993913 572 log.go:181] (0xc000cba000) Go away received\nI0914 12:06:59.994182 572 log.go:181] (0xc000cba000) (0xc000c660a0) Stream removed, broadcasting: 1\nI0914 12:06:59.994198 572 log.go:181] (0xc000cba000) (0xc000d86000) Stream removed, broadcasting: 3\nI0914 12:06:59.994204 572 log.go:181] (0xc000cba000) (0xc000d0c1e0) Stream removed, broadcasting: 5\n" Sep 14 12:06:59.997: INFO: stdout: "iptables" Sep 14 12:06:59.997: INFO: proxyMode: iptables Sep 14 12:07:00.002: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 12:07:00.027: INFO: Pod kube-proxy-mode-detector still exists Sep 14 12:07:02.028: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 12:07:02.722: INFO: Pod kube-proxy-mode-detector still exists Sep 14 12:07:04.028: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 12:07:04.033: INFO: Pod kube-proxy-mode-detector still exists Sep 14 12:07:06.028: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 12:07:06.031: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7976 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7976 I0914 12:07:06.078188 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7976, replica count: 3 I0914 12:07:09.128530 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:07:12.128856 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 12:07:12.140: INFO: Creating new exec pod Sep 14 12:07:17.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 14 12:07:17.427: INFO: stderr: "I0914 12:07:17.350347 590 log.go:181] (0xc0007ac000) (0xc00087e820) Create stream\nI0914 12:07:17.350413 590 log.go:181] (0xc0007ac000) (0xc00087e820) Stream added, broadcasting: 1\nI0914 12:07:17.352759 590 log.go:181] (0xc0007ac000) Reply frame received for 1\nI0914 12:07:17.352834 590 log.go:181] (0xc0007ac000) (0xc000664000) Create stream\nI0914 12:07:17.352851 590 log.go:181] (0xc0007ac000) (0xc000664000) Stream added, broadcasting: 3\nI0914 12:07:17.353872 590 log.go:181] (0xc0007ac000) Reply frame received for 3\nI0914 12:07:17.353939 590 log.go:181] (0xc0007ac000) (0xc000b600a0) Create stream\nI0914 12:07:17.353958 590 log.go:181] (0xc0007ac000) (0xc000b600a0) Stream added, broadcasting: 5\nI0914 12:07:17.354879 590 log.go:181] (0xc0007ac000) Reply frame received for 5\nI0914 12:07:17.422722 590 log.go:181] (0xc0007ac000) Data frame received for 3\nI0914 12:07:17.422758 590 log.go:181] (0xc000664000) (3) Data frame handling\nI0914 12:07:17.422792 590 log.go:181] (0xc0007ac000) Data frame received for 5\nI0914 12:07:17.422814 590 log.go:181] (0xc000b600a0) (5) Data frame handling\nI0914 12:07:17.422832 590 log.go:181] (0xc000b600a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0914 12:07:17.422844 590 log.go:181] (0xc0007ac000) Data frame received for 5\nI0914 12:07:17.422873 590 log.go:181] 
(0xc000b600a0) (5) Data frame handling\nI0914 12:07:17.424104 590 log.go:181] (0xc0007ac000) Data frame received for 1\nI0914 12:07:17.424238 590 log.go:181] (0xc00087e820) (1) Data frame handling\nI0914 12:07:17.424296 590 log.go:181] (0xc00087e820) (1) Data frame sent\nI0914 12:07:17.424346 590 log.go:181] (0xc0007ac000) (0xc00087e820) Stream removed, broadcasting: 1\nI0914 12:07:17.424395 590 log.go:181] (0xc0007ac000) Go away received\nI0914 12:07:17.424650 590 log.go:181] (0xc0007ac000) (0xc00087e820) Stream removed, broadcasting: 1\nI0914 12:07:17.424665 590 log.go:181] (0xc0007ac000) (0xc000664000) Stream removed, broadcasting: 3\nI0914 12:07:17.424671 590 log.go:181] (0xc0007ac000) (0xc000b600a0) Stream removed, broadcasting: 5\n" Sep 14 12:07:17.427: INFO: stdout: "" Sep 14 12:07:17.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c nc -zv -t -w 2 10.96.54.155 80' Sep 14 12:07:17.642: INFO: stderr: "I0914 12:07:17.554709 608 log.go:181] (0xc000142370) (0xc0003d7cc0) Create stream\nI0914 12:07:17.554810 608 log.go:181] (0xc000142370) (0xc0003d7cc0) Stream added, broadcasting: 1\nI0914 12:07:17.557118 608 log.go:181] (0xc000142370) Reply frame received for 1\nI0914 12:07:17.557193 608 log.go:181] (0xc000142370) (0xc000e8e1e0) Create stream\nI0914 12:07:17.557229 608 log.go:181] (0xc000142370) (0xc000e8e1e0) Stream added, broadcasting: 3\nI0914 12:07:17.558191 608 log.go:181] (0xc000142370) Reply frame received for 3\nI0914 12:07:17.558238 608 log.go:181] (0xc000142370) (0xc0008120a0) Create stream\nI0914 12:07:17.558252 608 log.go:181] (0xc000142370) (0xc0008120a0) Stream added, broadcasting: 5\nI0914 12:07:17.559035 608 log.go:181] (0xc000142370) Reply frame received for 5\nI0914 12:07:17.632901 608 log.go:181] (0xc000142370) Data frame received for 5\nI0914 12:07:17.632936 608 log.go:181] (0xc0008120a0) (5) Data frame 
handling\nI0914 12:07:17.632950 608 log.go:181] (0xc0008120a0) (5) Data frame sent\nI0914 12:07:17.632958 608 log.go:181] (0xc000142370) Data frame received for 5\nI0914 12:07:17.632964 608 log.go:181] (0xc0008120a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.54.155 80\nConnection to 10.96.54.155 80 port [tcp/http] succeeded!\nI0914 12:07:17.632987 608 log.go:181] (0xc000142370) Data frame received for 3\nI0914 12:07:17.632994 608 log.go:181] (0xc000e8e1e0) (3) Data frame handling\nI0914 12:07:17.639042 608 log.go:181] (0xc000142370) Data frame received for 1\nI0914 12:07:17.639078 608 log.go:181] (0xc0003d7cc0) (1) Data frame handling\nI0914 12:07:17.639102 608 log.go:181] (0xc0003d7cc0) (1) Data frame sent\nI0914 12:07:17.639119 608 log.go:181] (0xc000142370) (0xc0003d7cc0) Stream removed, broadcasting: 1\nI0914 12:07:17.639155 608 log.go:181] (0xc000142370) Go away received\nI0914 12:07:17.639533 608 log.go:181] (0xc000142370) (0xc0003d7cc0) Stream removed, broadcasting: 1\nI0914 12:07:17.639549 608 log.go:181] (0xc000142370) (0xc000e8e1e0) Stream removed, broadcasting: 3\nI0914 12:07:17.639557 608 log.go:181] (0xc000142370) (0xc0008120a0) Stream removed, broadcasting: 5\n" Sep 14 12:07:17.643: INFO: stdout: "" Sep 14 12:07:17.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31598' Sep 14 12:07:17.854: INFO: stderr: "I0914 12:07:17.777475 626 log.go:181] (0xc000e48fd0) (0xc000e20960) Create stream\nI0914 12:07:17.777532 626 log.go:181] (0xc000e48fd0) (0xc000e20960) Stream added, broadcasting: 1\nI0914 12:07:17.783265 626 log.go:181] (0xc000e48fd0) Reply frame received for 1\nI0914 12:07:17.783319 626 log.go:181] (0xc000e48fd0) (0xc000e20000) Create stream\nI0914 12:07:17.783335 626 log.go:181] (0xc000e48fd0) (0xc000e20000) Stream added, broadcasting: 3\nI0914 12:07:17.784612 626 log.go:181] 
(0xc000e48fd0) Reply frame received for 3\nI0914 12:07:17.784655 626 log.go:181] (0xc000e48fd0) (0xc0001a3ea0) Create stream\nI0914 12:07:17.784669 626 log.go:181] (0xc000e48fd0) (0xc0001a3ea0) Stream added, broadcasting: 5\nI0914 12:07:17.785612 626 log.go:181] (0xc000e48fd0) Reply frame received for 5\nI0914 12:07:17.849178 626 log.go:181] (0xc000e48fd0) Data frame received for 3\nI0914 12:07:17.849205 626 log.go:181] (0xc000e20000) (3) Data frame handling\nI0914 12:07:17.849240 626 log.go:181] (0xc000e48fd0) Data frame received for 5\nI0914 12:07:17.849251 626 log.go:181] (0xc0001a3ea0) (5) Data frame handling\nI0914 12:07:17.849262 626 log.go:181] (0xc0001a3ea0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31598\nConnection to 172.18.0.15 31598 port [tcp/31598] succeeded!\nI0914 12:07:17.849383 626 log.go:181] (0xc000e48fd0) Data frame received for 5\nI0914 12:07:17.849407 626 log.go:181] (0xc0001a3ea0) (5) Data frame handling\nI0914 12:07:17.851352 626 log.go:181] (0xc000e48fd0) Data frame received for 1\nI0914 12:07:17.851369 626 log.go:181] (0xc000e20960) (1) Data frame handling\nI0914 12:07:17.851379 626 log.go:181] (0xc000e20960) (1) Data frame sent\nI0914 12:07:17.851389 626 log.go:181] (0xc000e48fd0) (0xc000e20960) Stream removed, broadcasting: 1\nI0914 12:07:17.851649 626 log.go:181] (0xc000e48fd0) Go away received\nI0914 12:07:17.851694 626 log.go:181] (0xc000e48fd0) (0xc000e20960) Stream removed, broadcasting: 1\nI0914 12:07:17.851705 626 log.go:181] (0xc000e48fd0) (0xc000e20000) Stream removed, broadcasting: 3\nI0914 12:07:17.851711 626 log.go:181] (0xc000e48fd0) (0xc0001a3ea0) Stream removed, broadcasting: 5\n" Sep 14 12:07:17.854: INFO: stdout: "" Sep 14 12:07:17.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31598' Sep 14 12:07:18.052: INFO: stderr: "I0914 12:07:17.980007 644 
log.go:181] (0xc000f18f20) (0xc000f96500) Create stream\nI0914 12:07:17.980058 644 log.go:181] (0xc000f18f20) (0xc000f96500) Stream added, broadcasting: 1\nI0914 12:07:17.985392 644 log.go:181] (0xc000f18f20) Reply frame received for 1\nI0914 12:07:17.985435 644 log.go:181] (0xc000f18f20) (0xc000d90000) Create stream\nI0914 12:07:17.985448 644 log.go:181] (0xc000f18f20) (0xc000d90000) Stream added, broadcasting: 3\nI0914 12:07:17.986291 644 log.go:181] (0xc000f18f20) Reply frame received for 3\nI0914 12:07:17.986332 644 log.go:181] (0xc000f18f20) (0xc000aa0000) Create stream\nI0914 12:07:17.986358 644 log.go:181] (0xc000f18f20) (0xc000aa0000) Stream added, broadcasting: 5\nI0914 12:07:17.987335 644 log.go:181] (0xc000f18f20) Reply frame received for 5\nI0914 12:07:18.046604 644 log.go:181] (0xc000f18f20) Data frame received for 3\nI0914 12:07:18.046658 644 log.go:181] (0xc000d90000) (3) Data frame handling\nI0914 12:07:18.046696 644 log.go:181] (0xc000f18f20) Data frame received for 5\nI0914 12:07:18.046716 644 log.go:181] (0xc000aa0000) (5) Data frame handling\nI0914 12:07:18.046746 644 log.go:181] (0xc000aa0000) (5) Data frame sent\nI0914 12:07:18.046766 644 log.go:181] (0xc000f18f20) Data frame received for 5\nI0914 12:07:18.046783 644 log.go:181] (0xc000aa0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 31598\nConnection to 172.18.0.16 31598 port [tcp/31598] succeeded!\nI0914 12:07:18.048813 644 log.go:181] (0xc000f18f20) Data frame received for 1\nI0914 12:07:18.048861 644 log.go:181] (0xc000f96500) (1) Data frame handling\nI0914 12:07:18.048894 644 log.go:181] (0xc000f96500) (1) Data frame sent\nI0914 12:07:18.048925 644 log.go:181] (0xc000f18f20) (0xc000f96500) Stream removed, broadcasting: 1\nI0914 12:07:18.048947 644 log.go:181] (0xc000f18f20) Go away received\nI0914 12:07:18.049359 644 log.go:181] (0xc000f18f20) (0xc000f96500) Stream removed, broadcasting: 1\nI0914 12:07:18.049391 644 log.go:181] (0xc000f18f20) (0xc000d90000) Stream removed, 
broadcasting: 3\nI0914 12:07:18.049404 644 log.go:181] (0xc000f18f20) (0xc000aa0000) Stream removed, broadcasting: 5\n" Sep 14 12:07:18.052: INFO: stdout: "" Sep 14 12:07:18.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31598/ ; done' Sep 14 12:07:18.389: INFO: stderr: "I0914 12:07:18.192131 662 log.go:181] (0xc0007b91e0) (0xc00083ea00) Create stream\nI0914 12:07:18.192244 662 log.go:181] (0xc0007b91e0) (0xc00083ea00) Stream added, broadcasting: 1\nI0914 12:07:18.199576 662 log.go:181] (0xc0007b91e0) Reply frame received for 1\nI0914 12:07:18.199645 662 log.go:181] (0xc0007b91e0) (0xc000311360) Create stream\nI0914 12:07:18.199660 662 log.go:181] (0xc0007b91e0) (0xc000311360) Stream added, broadcasting: 3\nI0914 12:07:18.200651 662 log.go:181] (0xc0007b91e0) Reply frame received for 3\nI0914 12:07:18.200696 662 log.go:181] (0xc0007b91e0) (0xc00083e000) Create stream\nI0914 12:07:18.200714 662 log.go:181] (0xc0007b91e0) (0xc00083e000) Stream added, broadcasting: 5\nI0914 12:07:18.201639 662 log.go:181] (0xc0007b91e0) Reply frame received for 5\nI0914 12:07:18.277630 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.277672 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.277685 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.277710 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.277719 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.277729 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.283132 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.283161 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.283188 
662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.283953 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.283976 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.284007 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.284052 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.284081 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.284114 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.291457 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.291490 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.291516 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.291985 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.292007 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.292024 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.292046 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.292064 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.292081 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.300949 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.300963 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.300982 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.301020 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.301034 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.301053 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.301063 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.301073 662 log.go:181] 
(0xc000311360) (3) Data frame handling\nI0914 12:07:18.301083 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.306434 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.306450 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.306461 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.307398 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.307416 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.307424 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.307435 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.307441 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.307447 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.307453 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.307458 662 log.go:181] (0xc00083e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.307471 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.311482 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.311499 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.311512 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.312095 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.312112 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.312119 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.312227 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.312266 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.312292 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.316010 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.316041 662 log.go:181] 
(0xc000311360) (3) Data frame handling\nI0914 12:07:18.316053 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.316987 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.317010 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.317034 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.317043 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.317056 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.317104 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.322024 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.322067 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.322103 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.322588 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.322669 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.322690 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.322708 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.322725 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.322743 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.326999 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.327027 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.327045 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.327811 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.327827 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.327835 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.327905 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.327929 662 log.go:181] 
(0xc00083e000) (5) Data frame handling\nI0914 12:07:18.327947 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.334992 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.335026 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.335050 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.335747 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.335770 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.335780 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.335808 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.335820 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.335829 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.339894 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.339932 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.339971 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.340791 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.340833 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.340850 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.340879 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.340890 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.340901 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.346310 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.346344 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.346369 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.346951 662 log.go:181] (0xc0007b91e0) Data 
frame received for 5\nI0914 12:07:18.347006 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.347031 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.347060 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.347077 662 log.go:181] (0xc00083e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.347098 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.347128 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.347162 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.347184 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.354528 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.354565 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.354602 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.355023 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.355039 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.355045 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.355053 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.355057 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.355062 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.355066 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.355070 662 log.go:181] (0xc00083e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.355078 662 log.go:181] (0xc00083e000) (5) Data frame sent\nI0914 12:07:18.362090 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.362114 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.362133 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.362896 662 log.go:181] (0xc0007b91e0) Data frame 
received for 3\nI0914 12:07:18.362915 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.362921 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.362929 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.362934 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.362939 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.369489 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.369514 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.369532 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.369734 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.369757 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.369778 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:18.369836 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.369852 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.369861 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.377152 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.377176 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.377194 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.377918 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.377967 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.377994 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.378026 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.378045 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.378073 662 log.go:181] (0xc00083e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:31598/\nI0914 12:07:18.383899 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.383944 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.383983 662 log.go:181] (0xc000311360) (3) Data frame sent\nI0914 12:07:18.384616 662 log.go:181] (0xc0007b91e0) Data frame received for 5\nI0914 12:07:18.384640 662 log.go:181] (0xc00083e000) (5) Data frame handling\nI0914 12:07:18.384753 662 log.go:181] (0xc0007b91e0) Data frame received for 3\nI0914 12:07:18.384782 662 log.go:181] (0xc000311360) (3) Data frame handling\nI0914 12:07:18.386695 662 log.go:181] (0xc0007b91e0) Data frame received for 1\nI0914 12:07:18.386714 662 log.go:181] (0xc00083ea00) (1) Data frame handling\nI0914 12:07:18.386721 662 log.go:181] (0xc00083ea00) (1) Data frame sent\nI0914 12:07:18.386730 662 log.go:181] (0xc0007b91e0) (0xc00083ea00) Stream removed, broadcasting: 1\nI0914 12:07:18.386789 662 log.go:181] (0xc0007b91e0) Go away received\nI0914 12:07:18.387014 662 log.go:181] (0xc0007b91e0) (0xc00083ea00) Stream removed, broadcasting: 1\nI0914 12:07:18.387026 662 log.go:181] (0xc0007b91e0) (0xc000311360) Stream removed, broadcasting: 3\nI0914 12:07:18.387031 662 log.go:181] (0xc0007b91e0) (0xc00083e000) Stream removed, broadcasting: 5\n" Sep 14 12:07:18.390: INFO: stdout: "\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l\naffinity-nodeport-timeout-fkb6l" Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: 
affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Received response from host: affinity-nodeport-timeout-fkb6l Sep 14 12:07:18.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31598/' Sep 14 12:07:20.078: INFO: stderr: "I0914 12:07:19.983403 680 log.go:181] (0xc00063f6b0) (0xc000636a00) Create stream\nI0914 12:07:19.983451 680 log.go:181] (0xc00063f6b0) (0xc000636a00) Stream added, broadcasting: 1\nI0914 12:07:19.986638 680 log.go:181] (0xc00063f6b0) Reply frame received for 1\nI0914 12:07:19.986673 680 log.go:181] (0xc00063f6b0) (0xc000c8e0a0) Create stream\nI0914 12:07:19.986682 680 log.go:181] (0xc00063f6b0) (0xc000c8e0a0) Stream added, broadcasting: 
3\nI0914 12:07:19.987297 680 log.go:181] (0xc00063f6b0) Reply frame received for 3\nI0914 12:07:19.987317 680 log.go:181] (0xc00063f6b0) (0xc000636000) Create stream\nI0914 12:07:19.987323 680 log.go:181] (0xc00063f6b0) (0xc000636000) Stream added, broadcasting: 5\nI0914 12:07:19.988117 680 log.go:181] (0xc00063f6b0) Reply frame received for 5\nI0914 12:07:20.066281 680 log.go:181] (0xc00063f6b0) Data frame received for 5\nI0914 12:07:20.066314 680 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 12:07:20.066335 680 log.go:181] (0xc000636000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:20.071658 680 log.go:181] (0xc00063f6b0) Data frame received for 3\nI0914 12:07:20.071670 680 log.go:181] (0xc000c8e0a0) (3) Data frame handling\nI0914 12:07:20.071677 680 log.go:181] (0xc000c8e0a0) (3) Data frame sent\nI0914 12:07:20.072787 680 log.go:181] (0xc00063f6b0) Data frame received for 5\nI0914 12:07:20.072844 680 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 12:07:20.072883 680 log.go:181] (0xc00063f6b0) Data frame received for 3\nI0914 12:07:20.072900 680 log.go:181] (0xc000c8e0a0) (3) Data frame handling\nI0914 12:07:20.074227 680 log.go:181] (0xc00063f6b0) Data frame received for 1\nI0914 12:07:20.074251 680 log.go:181] (0xc000636a00) (1) Data frame handling\nI0914 12:07:20.074289 680 log.go:181] (0xc000636a00) (1) Data frame sent\nI0914 12:07:20.074329 680 log.go:181] (0xc00063f6b0) (0xc000636a00) Stream removed, broadcasting: 1\nI0914 12:07:20.074350 680 log.go:181] (0xc00063f6b0) Go away received\nI0914 12:07:20.074819 680 log.go:181] (0xc00063f6b0) (0xc000636a00) Stream removed, broadcasting: 1\nI0914 12:07:20.074838 680 log.go:181] (0xc00063f6b0) (0xc000c8e0a0) Stream removed, broadcasting: 3\nI0914 12:07:20.074848 680 log.go:181] (0xc00063f6b0) (0xc000636000) Stream removed, broadcasting: 5\n" Sep 14 12:07:20.078: INFO: stdout: "affinity-nodeport-timeout-fkb6l" Sep 14 12:07:35.079: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7976 execpod-affinity8m565 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31598/' Sep 14 12:07:35.315: INFO: stderr: "I0914 12:07:35.218714 698 log.go:181] (0xc00003b340) (0xc000d04500) Create stream\nI0914 12:07:35.218786 698 log.go:181] (0xc00003b340) (0xc000d04500) Stream added, broadcasting: 1\nI0914 12:07:35.224570 698 log.go:181] (0xc00003b340) Reply frame received for 1\nI0914 12:07:35.224639 698 log.go:181] (0xc00003b340) (0xc000d04000) Create stream\nI0914 12:07:35.224666 698 log.go:181] (0xc00003b340) (0xc000d04000) Stream added, broadcasting: 3\nI0914 12:07:35.225620 698 log.go:181] (0xc00003b340) Reply frame received for 3\nI0914 12:07:35.225656 698 log.go:181] (0xc00003b340) (0xc00087e320) Create stream\nI0914 12:07:35.225669 698 log.go:181] (0xc00003b340) (0xc00087e320) Stream added, broadcasting: 5\nI0914 12:07:35.226571 698 log.go:181] (0xc00003b340) Reply frame received for 5\nI0914 12:07:35.303769 698 log.go:181] (0xc00003b340) Data frame received for 5\nI0914 12:07:35.303803 698 log.go:181] (0xc00087e320) (5) Data frame handling\nI0914 12:07:35.303833 698 log.go:181] (0xc00087e320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31598/\nI0914 12:07:35.309202 698 log.go:181] (0xc00003b340) Data frame received for 3\nI0914 12:07:35.309218 698 log.go:181] (0xc000d04000) (3) Data frame handling\nI0914 12:07:35.309229 698 log.go:181] (0xc000d04000) (3) Data frame sent\nI0914 12:07:35.309530 698 log.go:181] (0xc00003b340) Data frame received for 5\nI0914 12:07:35.309551 698 log.go:181] (0xc00087e320) (5) Data frame handling\nI0914 12:07:35.309567 698 log.go:181] (0xc00003b340) Data frame received for 3\nI0914 12:07:35.309576 698 log.go:181] (0xc000d04000) (3) Data frame handling\nI0914 12:07:35.310984 698 log.go:181] (0xc00003b340) Data frame received for 1\nI0914 12:07:35.311002 698 
log.go:181] (0xc000d04500) (1) Data frame handling\nI0914 12:07:35.311012 698 log.go:181] (0xc000d04500) (1) Data frame sent\nI0914 12:07:35.311026 698 log.go:181] (0xc00003b340) (0xc000d04500) Stream removed, broadcasting: 1\nI0914 12:07:35.311047 698 log.go:181] (0xc00003b340) Go away received\nI0914 12:07:35.311340 698 log.go:181] (0xc00003b340) (0xc000d04500) Stream removed, broadcasting: 1\nI0914 12:07:35.311355 698 log.go:181] (0xc00003b340) (0xc000d04000) Stream removed, broadcasting: 3\nI0914 12:07:35.311361 698 log.go:181] (0xc00003b340) (0xc00087e320) Stream removed, broadcasting: 5\n" Sep 14 12:07:35.315: INFO: stdout: "affinity-nodeport-timeout-h7lht" Sep 14 12:07:35.315: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7976, will wait for the garbage collector to delete the pods Sep 14 12:07:35.649: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 159.198234ms Sep 14 12:07:36.150: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.298256ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:07:46.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7976" for this suite. 
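The affinity phase above issues 16 `curl` requests against the NodePort and passes only when every response names the same backend pod, then repeats a single request after the timeout window and expects a different pod. A minimal sketch of that "all hostnames identical" check, using a hypothetical `check_affinity` helper and sample hostnames resembling the log (not the framework's actual Go implementation):

```shell
# Hypothetical helper mirroring the e2e affinity check: given the
# newline-separated hostnames returned by the curl loop, session
# affinity held only if they are all identical.
check_affinity() {
  distinct=$(printf '%s\n' "$1" | sort -u | wc -l)
  if [ "$distinct" -eq 1 ]; then
    echo "affinity held"
  else
    echo "affinity broken"
  fi
}

# Sample responses resembling the log above (same pod every time).
responses="affinity-nodeport-timeout-fkb6l
affinity-nodeport-timeout-fkb6l
affinity-nodeport-timeout-fkb6l"
check_affinity "$responses"   # -> affinity held
```

After the 15-second sleep the log shows the response switching from `affinity-nodeport-timeout-fkb6l` to `affinity-nodeport-timeout-h7lht`, which is the expected "affinity broken after timeout" outcome.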
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:53.954 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":46,"skipped":675,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:07:46.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it 
receives two notifications Sep 14 12:07:46.264: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4624 /api/v1/namespaces/watch-4624/configmaps/e2e-watch-test-watch-closed 125b1b76-8f6e-4587-a6ec-778a50787218 256856 0 2020-09-14 12:07:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-14 12:07:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:07:46.264: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4624 /api/v1/namespaces/watch-4624/configmaps/e2e-watch-test-watch-closed 125b1b76-8f6e-4587-a6ec-778a50787218 256857 0 2020-09-14 12:07:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-14 12:07:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 14 12:07:46.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4624 /api/v1/namespaces/watch-4624/configmaps/e2e-watch-test-watch-closed 125b1b76-8f6e-4587-a6ec-778a50787218 256858 0 2020-09-14 12:07:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-14 12:07:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:07:46.336: INFO: Got : 
DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4624 /api/v1/namespaces/watch-4624/configmaps/e2e-watch-test-watch-closed 125b1b76-8f6e-4587-a6ec-778a50787218 256859 0 2020-09-14 12:07:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-14 12:07:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:07:46.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4624" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":47,"skipped":701,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:07:46.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 14 12:07:46.403: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 14 12:07:46.417: INFO: Waiting for terminating namespaces to be deleted... Sep 14 12:07:46.420: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 14 12:07:46.426: INFO: coredns-f9fd979d6-rckh5 from kube-system started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.426: INFO: Container coredns ready: true, restart count 0 Sep 14 12:07:46.426: INFO: coredns-f9fd979d6-rtr7c from kube-system started at 2020-09-13 17:00:07 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.426: INFO: Container coredns ready: true, restart count 0 Sep 14 12:07:46.426: INFO: kindnet-x9kfh from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.426: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:07:46.426: INFO: kube-proxy-484ff from kube-system started at 2020-09-13 16:59:36 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.426: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:07:46.426: INFO: local-path-provisioner-78776bfc44-ks8gr from local-path-storage started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.426: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 14 12:07:46.426: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 14 12:07:46.431: INFO: kindnet-6mthj from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.431: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:07:46.431: INFO: kube-proxy-thrnr from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:07:46.431: INFO: Container 
kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Sep 14 12:07:46.531: INFO: Pod coredns-f9fd979d6-rckh5 requesting resource cpu=100m on Node latest-worker Sep 14 12:07:46.531: INFO: Pod coredns-f9fd979d6-rtr7c requesting resource cpu=100m on Node latest-worker Sep 14 12:07:46.531: INFO: Pod kindnet-6mthj requesting resource cpu=100m on Node latest-worker2 Sep 14 12:07:46.531: INFO: Pod kindnet-x9kfh requesting resource cpu=100m on Node latest-worker Sep 14 12:07:46.531: INFO: Pod kube-proxy-484ff requesting resource cpu=0m on Node latest-worker Sep 14 12:07:46.531: INFO: Pod kube-proxy-thrnr requesting resource cpu=0m on Node latest-worker2 Sep 14 12:07:46.531: INFO: Pod local-path-provisioner-78776bfc44-ks8gr requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Sep 14 12:07:46.531: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker Sep 14 12:07:46.539: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6.1634a512d2e7601a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6.1634a51328d760e3], Reason = [Created], Message = [Created container filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6.1634a5133a647c38], Reason = [Started], Message = [Started container filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6] STEP: Considering event: Type = [Normal], Name = [filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e.1634a51382888507], Reason = [Created], Message = [Created container filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e] STEP: Considering event: Type = [Normal], Name = [filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e.1634a5130a275b73], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6.1634a51283092c59], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5966/filler-pod-b58c9634-d341-49d3-8acb-77f630cdd8f6 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e.1634a512879fee41], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5966/filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e.1634a5138fc8450e], Reason = [Started], Message = [Started container filler-pod-6b19c339-c07b-4c9f-b19b-bfd5b3a7665e] STEP: Considering event: Type = [Warning], Name = [additional-pod.1634a513f0b417e6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: 
}, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:07:53.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5966" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.421 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":48,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Sep 14 12:07:53.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:07:53.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9930" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":49,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:07:53.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-213 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-213 to expose endpoints map[] Sep 14 12:07:54.080: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Sep 14 12:07:55.134: INFO: successfully validated that service endpoint-test2 in namespace services-213 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-213 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-213 to expose endpoints map[pod1:[80]] Sep 14 12:07:58.221: INFO: successfully validated that service endpoint-test2 in namespace services-213 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-213 STEP: 
waiting up to 3m0s for service endpoint-test2 in namespace services-213 to expose endpoints map[pod1:[80] pod2:[80]] Sep 14 12:08:02.352: INFO: successfully validated that service endpoint-test2 in namespace services-213 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-213 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-213 to expose endpoints map[pod2:[80]] Sep 14 12:08:02.389: INFO: successfully validated that service endpoint-test2 in namespace services-213 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-213 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-213 to expose endpoints map[] Sep 14 12:08:03.427: INFO: successfully validated that service endpoint-test2 in namespace services-213 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:08:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-213" for this suite. 
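The endpoint tracking exercised by the test above (endpoints appear as label-matched pods are created and are removed as they are deleted) can be reproduced with a minimal Service plus Pod pair. This is a sketch, not the suite's actual manifests; the image and labels are illustrative:

```yaml
# A Service selects pods by label; the endpoints controller adds and
# removes pod IPs as matching pods come and go. Deleting "pod1" below
# returns the service to an empty endpoints map, as seen in the log.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: pod1          # endpoints are derived from pods matching this selector
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: pod1          # matches the service selector above
spec:
  containers:
    - name: server
      image: docker.io/library/busybox:1.29   # illustrative; any image listening on 80 works
      command: ["sh", "-c", "while true; do nc -l -p 80; done"]
      ports:
        - containerPort: 80
```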
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.544 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":50,"skipped":744,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:08:03.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-390396d1-464f-4a79-b3e3-2794f36372d6 in 
namespace container-probe-3632 Sep 14 12:08:07.579: INFO: Started pod liveness-390396d1-464f-4a79-b3e3-2794f36372d6 in namespace container-probe-3632 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 12:08:07.582: INFO: Initial restart count of pod liveness-390396d1-464f-4a79-b3e3-2794f36372d6 is 0 Sep 14 12:08:31.574: INFO: Restart count of pod container-probe-3632/liveness-390396d1-464f-4a79-b3e3-2794f36372d6 is now 1 (23.992348462s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:08:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3632" for this suite. • [SLOW TEST:29.877 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":750,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Sep 14 12:08:33.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 14 12:08:38.013: INFO: Successfully updated pod "annotationupdatebc4aa12e-755a-46f4-be20-38b3d939316b" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:08:40.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9977" for this suite. • [SLOW TEST:6.670 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":752,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:08:40.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:08:51.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8752" for this suite. • [SLOW TEST:11.225 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":53,"skipped":769,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:08:51.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:08:51.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb" in namespace "downward-api-4762" to be "Succeeded or Failed" Sep 14 12:08:51.375: INFO: Pod "downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.623216ms Sep 14 12:08:53.394: INFO: Pod "downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049057453s Sep 14 12:08:55.398: INFO: Pod "downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05300331s STEP: Saw pod success Sep 14 12:08:55.398: INFO: Pod "downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb" satisfied condition "Succeeded or Failed" Sep 14 12:08:55.400: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb container client-container: STEP: delete the pod Sep 14 12:08:55.564: INFO: Waiting for pod downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb to disappear Sep 14 12:08:55.612: INFO: Pod downwardapi-volume-cd521324-02ba-4c28-8be6-79c8c41d9abb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:08:55.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4762" for this suite. 
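The downward API volume behavior verified above can be sketched as a pod that projects its own memory limit into a file via `resourceFieldRef`. A minimal sketch under assumed names (the test's generated pod name is replaced with an illustrative one):

```yaml
# The container reads its declared memory limit back from a file that
# the downward API volume populates, then exits; the test's pattern is
# "wait for the pod to succeed, then check the container log".
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"             # illustrative value; this is what gets projected
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```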
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":777,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:08:55.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 14 12:08:55.661: INFO: PodSpec: initContainers in spec.initContainers Sep 14 12:09:51.257: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d14aec50-071a-41b1-8c98-56aa14e1288a", GenerateName:"", Namespace:"init-container-1932", SelfLink:"/api/v1/namespaces/init-container-1932/pods/pod-init-d14aec50-071a-41b1-8c98-56aa14e1288a", UID:"d2716bff-6576-4c67-b5fc-6d8366d54cb9", ResourceVersion:"257518", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735682135, 
loc:(*time.Location)(0x7702840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"661921673"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00339c300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00339c320)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00339c340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00339c360)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rpzgp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0019c9580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rpzgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rpzgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rpzgp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003186c88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002198770), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003186d10)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003186d30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003186d38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003186d3c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003a26a90), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682135, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682135, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682135, loc:(*time.Location)(0x7702840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682135, loc:(*time.Location)(0x7702840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.16", PodIP:"10.244.2.49", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.49"}}, StartTime:(*v1.Time)(0xc00339c380), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021988c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002198930)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ae89f157ffcad5147b41fa1ef41095bc0ea69e0c42139b931c8444ea053b95de", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00339c3c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00339c3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc003186dbf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:09:51.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1932" for this suite. • [SLOW TEST:55.673 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":55,"skipped":784,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:09:51.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io 
STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 14 12:09:52.414: INFO: starting watch STEP: patching STEP: updating Sep 14 12:09:52.422: INFO: waiting for watch events with expected annotations Sep 14 12:09:52.422: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:09:52.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4707" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":56,"skipped":793,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:09:52.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:09:52.675: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 14 12:09:57.679: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 14 12:09:57.679: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 14 12:09:57.725: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9000 /apis/apps/v1/namespaces/deployment-9000/deployments/test-cleanup-deployment 382ca73c-fa3c-4692-a55d-6b3d9ea55cc9 257582 1 2020-09-14 12:09:57 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-09-14 12:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000913bc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Sep 14 12:09:57.774: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-9000 /apis/apps/v1/namespaces/deployment-9000/replicasets/test-cleanup-deployment-5d446bdd47 8e962a88-1c27-49ef-9719-e575c71beeb7 257584 1 2020-09-14 12:09:57 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 382ca73c-fa3c-4692-a55d-6b3d9ea55cc9 0xc002b520a7 0xc002b520a8}] [] [{kube-controller-manager Update apps/v1 2020-09-14 12:09:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"382ca73c-fa3c-4692-a55d-6b3d9ea55cc9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b52278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:09:57.774: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 14 12:09:57.774: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9000 /apis/apps/v1/namespaces/deployment-9000/replicasets/test-cleanup-controller f18156f0-28f0-4ea1-8df7-edda81dea2ae 257583 1 2020-09-14 12:09:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 382ca73c-fa3c-4692-a55d-6b3d9ea55cc9 0xc000de5bd7 0xc000de5bd8}] [] [{e2e.test Update apps/v1 2020-09-14 12:09:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 12:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"382ca73c-fa3c-4692-a55d-6b3d9ea55cc9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000de5d08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:09:57.822: INFO: Pod "test-cleanup-controller-j6swp" is available: &Pod{ObjectMeta:{test-cleanup-controller-j6swp test-cleanup-controller- deployment-9000 /api/v1/namespaces/deployment-9000/pods/test-cleanup-controller-j6swp 0e8a98c5-5442-450d-8093-f43d714cba01 257564 0 2020-09-14 12:09:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller f18156f0-28f0-4ea1-8df7-edda81dea2ae 0xc003ff2ad7 0xc003ff2ad8}] [] [{kube-controller-manager Update v1 2020-09-14 12:09:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f18156f0-28f0-4ea1-8df7-edda81dea2ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:09:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.50\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnlns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnlns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnlns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,De
precatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:09:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:09:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:09:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:09:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.50,StartTime:2020-09-14 12:09:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:09:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fc1157b5a4bd074790a356acd6c8966faf10d945b262e281576d6076c863ab8f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:09:57.822: INFO: Pod "test-cleanup-deployment-5d446bdd47-phgr7" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-phgr7 test-cleanup-deployment-5d446bdd47- deployment-9000 /api/v1/namespaces/deployment-9000/pods/test-cleanup-deployment-5d446bdd47-phgr7 521fc46f-a435-4355-94a3-1af2a85af1e5 257590 0 2020-09-14 12:09:57 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 8e962a88-1c27-49ef-9719-e575c71beeb7 0xc003ff2c97 0xc003ff2c98}] [] [{kube-controller-manager Update v1 2020-09-14 12:09:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e962a88-1c27-49ef-9719-e575c71beeb7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xnlns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xnlns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xnlns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:09:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:09:57.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9000" for this suite. • [SLOW TEST:5.350 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":57,"skipped":806,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:09:57.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:10:02.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7628" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":58,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:10:02.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0914 12:10:04.622291 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 14 12:11:06.641: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:11:06.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7487" for this suite. • [SLOW TEST:64.041 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":59,"skipped":885,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:11:06.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret 
secrets-4862/secret-test-fe721f4e-23d3-4d9c-8546-296932c4d7ca STEP: Creating a pod to test consume secrets Sep 14 12:11:06.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7" in namespace "secrets-4862" to be "Succeeded or Failed" Sep 14 12:11:06.827: INFO: Pod "pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 49.446724ms Sep 14 12:11:08.847: INFO: Pod "pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069810862s Sep 14 12:11:10.851: INFO: Pod "pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073725651s STEP: Saw pod success Sep 14 12:11:10.851: INFO: Pod "pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7" satisfied condition "Succeeded or Failed" Sep 14 12:11:10.854: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7 container env-test: STEP: delete the pod Sep 14 12:11:10.911: INFO: Waiting for pod pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7 to disappear Sep 14 12:11:10.921: INFO: Pod pod-configmaps-8100b8eb-b3cc-4227-850b-2f563bc5d6b7 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:11:10.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4862" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:11:10.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:12:11.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5823" for this suite. 
• [SLOW TEST:60.081 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":920,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:11.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c0261e46-a189-41fc-83e9-3a843e108151
STEP: Creating a pod to test consume configMaps
Sep 14 12:12:11.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e" in namespace "projected-577" to be "Succeeded or Failed"
Sep 14 12:12:11.222: INFO: Pod "pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846249ms
Sep 14 12:12:13.228: INFO: Pod "pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00848153s
Sep 14 12:12:15.259: INFO: Pod "pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039953111s
STEP: Saw pod success
Sep 14 12:12:15.259: INFO: Pod "pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e" satisfied condition "Succeeded or Failed"
Sep 14 12:12:15.263: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e container projected-configmap-volume-test:
STEP: delete the pod
Sep 14 12:12:15.297: INFO: Waiting for pod pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e to disappear
Sep 14 12:12:15.310: INFO: Pod pod-projected-configmaps-40f43df1-64c8-4994-9ef1-90a36494846e no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:15.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-577" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":925,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook
when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:15.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Sep 14 12:12:25.507: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 14 12:12:25.522: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 14 12:12:27.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 14 12:12:27.527: INFO: Pod pod-with-prestop-exec-hook still exists
Sep 14 12:12:29.523: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Sep 14 12:12:29.527: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:29.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3093" for this suite.
• [SLOW TEST:14.225 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":932,"failed":0}
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:29.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-f64504b1-d41f-4fcc-91f2-f532fb42be4f
STEP: Creating a pod to test consume secrets
Sep 14 12:12:29.618: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0" in namespace "projected-4301" to be "Succeeded or Failed"
Sep 14 12:12:29.650: INFO: Pod "pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.002788ms
Sep 14 12:12:31.654: INFO: Pod "pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036001519s
Sep 14 12:12:33.665: INFO: Pod "pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046981886s
STEP: Saw pod success
Sep 14 12:12:33.665: INFO: Pod "pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0" satisfied condition "Succeeded or Failed"
Sep 14 12:12:33.667: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0 container secret-volume-test:
STEP: delete the pod
Sep 14 12:12:33.713: INFO: Waiting for pod pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0 to disappear
Sep 14 12:12:33.730: INFO: Pod pod-projected-secrets-8c8c19ad-92de-473f-a583-e34e4677fca0 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:33.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4301" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":932,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch
should add annotations for pods in rc [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:33.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should add annotations for pods in rc [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Sep 14 12:12:33.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1046'
Sep 14 12:12:34.137: INFO: stderr: ""
Sep 14 12:12:34.137: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Sep 14 12:12:35.157: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:35.157: INFO: Found 0 / 1
Sep 14 12:12:36.142: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:36.142: INFO: Found 0 / 1
Sep 14 12:12:37.144: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:37.144: INFO: Found 0 / 1
Sep 14 12:12:38.142: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:38.142: INFO: Found 1 / 1
Sep 14 12:12:38.142: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Sep 14 12:12:38.145: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:38.145: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Sep 14 12:12:38.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config patch pod agnhost-primary-vtrzf --namespace=kubectl-1046 -p {"metadata":{"annotations":{"x":"y"}}}'
Sep 14 12:12:38.245: INFO: stderr: ""
Sep 14 12:12:38.245: INFO: stdout: "pod/agnhost-primary-vtrzf patched\n"
STEP: checking annotations
Sep 14 12:12:38.261: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 14 12:12:38.261: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:38.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1046" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":65,"skipped":936,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:38.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-9223/configmap-test-d203de38-f92b-4fe2-8f63-5177cc7466ca
STEP: Creating a pod to test consume configMaps
Sep 14 12:12:38.385: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631" in namespace "configmap-9223" to be "Succeeded or Failed"
Sep 14 12:12:38.414: INFO: Pod "pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631": Phase="Pending", Reason="", readiness=false. Elapsed: 28.189036ms
Sep 14 12:12:40.418: INFO: Pod "pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032185667s
Sep 14 12:12:42.422: INFO: Pod "pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036266196s
STEP: Saw pod success
Sep 14 12:12:42.422: INFO: Pod "pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631" satisfied condition "Succeeded or Failed"
Sep 14 12:12:42.425: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631 container env-test:
STEP: delete the pod
Sep 14 12:12:42.456: INFO: Waiting for pod pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631 to disappear
Sep 14 12:12:42.479: INFO: Pod pod-configmaps-2c510a60-4ce3-4264-9077-33247bff6631 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:42.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9223" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":940,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:42.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Sep 14 12:12:42.589: INFO: Waiting up to 5m0s for pod "client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1" in namespace "containers-2313" to be "Succeeded or Failed"
Sep 14 12:12:42.615: INFO: Pod "client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.419124ms
Sep 14 12:12:44.619: INFO: Pod "client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029923571s
Sep 14 12:12:46.623: INFO: Pod "client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034190688s
STEP: Saw pod success
Sep 14 12:12:46.623: INFO: Pod "client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1" satisfied condition "Succeeded or Failed"
Sep 14 12:12:46.627: INFO: Trying to get logs from node latest-worker2 pod client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1 container test-container:
STEP: delete the pod
Sep 14 12:12:46.666: INFO: Waiting for pod client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1 to disappear
Sep 14 12:12:46.671: INFO: Pod client-containers-a42f0735-8292-4b0e-80c9-6c3b7bf874b1 no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:46.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2313" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":67,"skipped":948,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:46.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 14 12:12:47.216: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 14 12:12:49.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682367, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682367, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682367, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682367, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 12:12:52.263: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:12:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6917" for this suite.
STEP: Destroying namespace "webhook-6917-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.953 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":68,"skipped":953,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:12:52.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:13:05.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9078" for this suite.
• [SLOW TEST:13.294 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":69,"skipped":961,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:13:05.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:13:12.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2085" for this suite.
STEP: Destroying namespace "nsdeletetest-5220" for this suite.
Sep 14 12:13:12.350: INFO: Namespace nsdeletetest-5220 was already deleted
STEP: Destroying namespace "nsdeletetest-5867" for this suite.
• [SLOW TEST:6.414 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":70,"skipped":963,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:13:12.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 14 12:13:12.434: INFO: Waiting up to 5m0s for pod "pod-02466dca-554e-4f8d-8ca5-c4e5127016a8" in namespace "emptydir-1506" to be "Succeeded or Failed"
Sep 14 12:13:12.440: INFO: Pod "pod-02466dca-554e-4f8d-8ca5-c4e5127016a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061727ms
Sep 14 12:13:14.445: INFO: Pod "pod-02466dca-554e-4f8d-8ca5-c4e5127016a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011075523s
Sep 14 12:13:16.450: INFO: Pod "pod-02466dca-554e-4f8d-8ca5-c4e5127016a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015684701s
STEP: Saw pod success
Sep 14 12:13:16.450: INFO: Pod "pod-02466dca-554e-4f8d-8ca5-c4e5127016a8" satisfied condition "Succeeded or Failed"
Sep 14 12:13:16.453: INFO: Trying to get logs from node latest-worker2 pod pod-02466dca-554e-4f8d-8ca5-c4e5127016a8 container test-container:
STEP: delete the pod
Sep 14 12:13:16.471: INFO: Waiting for pod pod-02466dca-554e-4f8d-8ca5-c4e5127016a8 to disappear
Sep 14 12:13:16.476: INFO: Pod pod-02466dca-554e-4f8d-8ca5-c4e5127016a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:13:16.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1506" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":71,"skipped":1017,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:13:16.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Sep 14 12:13:16.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Sep 14 12:13:27.424: INFO: >>> kubeConfig: /root/.kube/config
Sep 14 12:13:30.470: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:13:41.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-311" for this suite.
• [SLOW TEST:25.351 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":72,"skipped":1022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:13:41.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Sep 14 12:13:41.909: INFO: >>> kubeConfig: /root/.kube/config Sep 14 12:13:44.859: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:13:55.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7512" for this suite. • [SLOW TEST:13.795 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":73,"skipped":1046,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:13:55.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 14 12:13:55.701: INFO: Waiting up to 5m0s for pod "client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313" in namespace "containers-1167" to be "Succeeded or Failed" Sep 14 12:13:55.735: INFO: Pod "client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313": Phase="Pending", Reason="", readiness=false. Elapsed: 33.934967ms Sep 14 12:13:57.738: INFO: Pod "client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037242057s Sep 14 12:13:59.745: INFO: Pod "client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043524742s STEP: Saw pod success Sep 14 12:13:59.745: INFO: Pod "client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313" satisfied condition "Succeeded or Failed" Sep 14 12:13:59.748: INFO: Trying to get logs from node latest-worker2 pod client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313 container test-container: STEP: delete the pod Sep 14 12:13:59.832: INFO: Waiting for pod client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313 to disappear Sep 14 12:13:59.843: INFO: Pod client-containers-f9e1cd23-5a39-4c9e-a141-8ba00eddf313 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:13:59.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1167" for this suite. 
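[Editor's note] The Docker Containers test above verifies that a pod's `command` and `args` fields override the image's ENTRYPOINT and CMD. A minimal sketch of the kind of pod manifest involved, written as a plain Python dict; the image reference and names here are illustrative assumptions, not taken from the test source:

```python
# Hypothetical sketch of a pod spec that overrides both the image default
# command (ENTRYPOINT) and arguments (CMD). Field names follow the Pod API;
# the image and object names are made up for illustration.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "test-container",
                "image": "busybox",  # illustrative image
                # `command` replaces the image ENTRYPOINT entirely;
                # `args` replaces the image CMD entirely.
                "command": ["/bin/echo"],
                "args": ["override", "arguments"],
            }
        ],
    },
}
```

Because `command` is set, the image's own entrypoint never runs; because `args` is also set, the image's default arguments are discarded too, which is exactly the "override all" case the pod in this test exercises.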
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1052,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:13:59.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 14 12:13:59.924: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 14 12:13:59.947: INFO: Waiting for terminating namespaces to be deleted... 
Sep 14 12:13:59.950: INFO: Logging pods the apiserver thinks are on node latest-worker before test Sep 14 12:13:59.957: INFO: coredns-f9fd979d6-rckh5 from kube-system started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.957: INFO: Container coredns ready: true, restart count 0 Sep 14 12:13:59.957: INFO: coredns-f9fd979d6-rtr7c from kube-system started at 2020-09-13 17:00:07 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.957: INFO: Container coredns ready: true, restart count 0 Sep 14 12:13:59.957: INFO: kindnet-x9kfh from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.957: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:13:59.957: INFO: kube-proxy-484ff from kube-system started at 2020-09-13 16:59:36 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.957: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:13:59.957: INFO: local-path-provisioner-78776bfc44-ks8gr from local-path-storage started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.957: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 14 12:13:59.957: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Sep 14 12:13:59.962: INFO: kindnet-6mthj from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.962: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:13:59.962: INFO: kube-proxy-thrnr from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:13:59.963: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b4ea368c-609d-4635-99b3-cd8f0dcea262 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b4ea368c-609d-4635-99b3-cd8f0dcea262 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b4ea368c-609d-4635-99b3-cd8f0dcea262 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:19:08.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5018" for this suite. 
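[Editor's note] The scheduling conflict exercised above reduces to a simple rule: two hostPorts on the same node collide when port and protocol match and their hostIPs overlap, where 0.0.0.0 (or an empty hostIP, as pod4 uses) overlaps every address. A rough sketch of that predicate on simplified inputs; this is illustrative, not the scheduler's actual code:

```python
WILDCARD = "0.0.0.0"

def hosts_overlap(ip_a: str, ip_b: str) -> bool:
    """An empty or 0.0.0.0 hostIP binds all interfaces, so it overlaps any address."""
    a = ip_a or WILDCARD
    b = ip_b or WILDCARD
    return a == WILDCARD or b == WILDCARD or a == b

def host_ports_conflict(a: tuple, b: tuple) -> bool:
    """a and b are (hostIP, protocol, hostPort) triples from two pods on one node."""
    ip_a, proto_a, port_a = a
    ip_b, proto_b, port_b = b
    return port_a == port_b and proto_a == proto_b and hosts_overlap(ip_a, ip_b)

# pod4 binds 54322 with an empty hostIP (wildcard); pod5 asks for 54322 on
# 127.0.0.1 with the same protocol, so they conflict and pod5 cannot schedule.
assert host_ports_conflict(("", "TCP", 54322), ("127.0.0.1", "TCP", 54322))
```

This is why the test expects pod5 to remain unscheduled on pod4's node: 127.0.0.1:54322/TCP falls inside the range already claimed by 0.0.0.0:54322/TCP.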
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.342 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":75,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:19:08.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:19:09.054: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:19:11.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682749, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682749, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682749, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682748, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:19:14.108: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create 
a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:19:14.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8913" for this suite. STEP: Destroying namespace "webhook-8913-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.371 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":76,"skipped":1092,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:19:14.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7503, will wait for the garbage collector to delete the pods Sep 14 12:19:20.730: INFO: Deleting Job.batch foo took: 6.638807ms Sep 14 12:19:20.831: INFO: Terminating Job.batch foo pods took: 100.293446ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:19:56.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7503" for this suite. • [SLOW TEST:41.479 seconds] [sig-apps] Job /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":77,"skipped":1107,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating 
a kubernetes client Sep 14 12:19:56.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:19:56.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:19:58.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682796, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682796, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682796, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735682796, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:20:01.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:20:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5551" for this suite. STEP: Destroying namespace "webhook-5551-markers" for this suite. 
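[Editor's note] The four STEPs above walk a timeout matrix: a 1s timeout against a 5s-slow webhook is rejected under failurePolicy Fail, tolerated under Ignore, a timeout longer than the latency succeeds, and an unset timeout defaults to 10s in admissionregistration.k8s.io/v1. A simplified decision table as a sketch, assuming a webhook that admits whenever it answers in time; this is illustrative, not the apiserver's implementation:

```python
def admission_outcome(latency_s: float, timeout_s, failure_policy: str) -> str:
    """Simplified model of how the API server treats a slow admission webhook.

    latency_s: how long the webhook takes to answer.
    timeout_s: the webhook's timeoutSeconds, or None when unset.
    failure_policy: "Fail" or "Ignore".
    """
    if timeout_s is None:
        timeout_s = 10  # v1 default when timeoutSeconds is unset
    if latency_s <= timeout_s:
        return "allowed"  # webhook answered in time (and admits in this model)
    # The call timed out, so failurePolicy decides the outcome.
    return "allowed" if failure_policy == "Ignore" else "denied"

assert admission_outcome(5, 1, "Fail") == "denied"      # request fails
assert admission_outcome(5, 1, "Ignore") == "allowed"   # timeout ignored
assert admission_outcome(5, 30, "Fail") == "allowed"    # webhook in time
assert admission_outcome(5, None, "Fail") == "allowed"  # 5s < 10s default
```

The four assertions correspond, in order, to the four registration STEPs logged by the test.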
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.972 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":78,"skipped":1121,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:20:14.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:20:31.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-559" for this suite. • [SLOW TEST:17.152 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":303,"completed":79,"skipped":1122,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:20:31.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 14 12:20:31.248: INFO: Waiting up to 5m0s for pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3" in namespace "emptydir-9792" to be "Succeeded or Failed" Sep 14 12:20:31.252: INFO: Pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.474766ms Sep 14 12:20:33.258: INFO: Pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00929884s Sep 14 12:20:35.261: INFO: Pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3": Phase="Running", Reason="", readiness=true. Elapsed: 4.012873046s Sep 14 12:20:37.267: INFO: Pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01845139s STEP: Saw pod success Sep 14 12:20:37.267: INFO: Pod "pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3" satisfied condition "Succeeded or Failed" Sep 14 12:20:37.270: INFO: Trying to get logs from node latest-worker2 pod pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3 container test-container: STEP: delete the pod Sep 14 12:20:37.315: INFO: Waiting for pod pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3 to disappear Sep 14 12:20:37.326: INFO: Pod pod-cf3b7c9c-5905-4c59-8df0-17b2c44521d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:20:37.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9792" for this suite. • [SLOW TEST:6.143 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1124,"failed":0} SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 
12:20:37.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:20:37.465: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7723 I0914 12:20:37.487472 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7723, replica count: 1 I0914 12:20:38.537894 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:20:39.538201 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:20:40.538455 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 12:20:40.689: INFO: Created: latency-svc-65rfn Sep 14 12:20:40.758: INFO: Got endpoints: latency-svc-65rfn [119.823162ms] Sep 14 12:20:40.810: INFO: Created: latency-svc-tt8t2 Sep 14 12:20:40.837: INFO: Got endpoints: latency-svc-tt8t2 [78.627815ms] Sep 14 12:20:40.852: INFO: Created: latency-svc-md4xm Sep 14 12:20:40.882: INFO: Got endpoints: latency-svc-md4xm [123.47635ms] Sep 14 12:20:40.888: INFO: Created: latency-svc-xkh9w Sep 14 12:20:40.901: INFO: Got endpoints: latency-svc-xkh9w [142.768275ms] Sep 14 12:20:40.925: INFO: Created: latency-svc-mv58w Sep 14 12:20:40.938: INFO: Got endpoints: latency-svc-mv58w [179.151475ms] Sep 14 12:20:40.958: INFO: Created: latency-svc-hcd9w Sep 14 12:20:41.031: INFO: Got endpoints: latency-svc-hcd9w [273.080499ms] Sep 14 12:20:41.042: INFO: Created: 
latency-svc-6r6j8 Sep 14 12:20:41.059: INFO: Got endpoints: latency-svc-6r6j8 [300.623677ms] Sep 14 12:20:41.090: INFO: Created: latency-svc-wp6pg Sep 14 12:20:41.102: INFO: Got endpoints: latency-svc-wp6pg [343.09119ms] Sep 14 12:20:41.120: INFO: Created: latency-svc-jlflz Sep 14 12:20:41.175: INFO: Got endpoints: latency-svc-jlflz [416.604622ms] Sep 14 12:20:41.206: INFO: Created: latency-svc-dfxgr Sep 14 12:20:41.228: INFO: Got endpoints: latency-svc-dfxgr [469.888416ms] Sep 14 12:20:41.248: INFO: Created: latency-svc-f7dg5 Sep 14 12:20:41.264: INFO: Got endpoints: latency-svc-f7dg5 [505.878285ms] Sep 14 12:20:41.318: INFO: Created: latency-svc-86mf6 Sep 14 12:20:41.335: INFO: Got endpoints: latency-svc-86mf6 [576.408687ms] Sep 14 12:20:41.354: INFO: Created: latency-svc-wgwsq Sep 14 12:20:41.377: INFO: Got endpoints: latency-svc-wgwsq [618.215097ms] Sep 14 12:20:41.396: INFO: Created: latency-svc-2vww7 Sep 14 12:20:41.451: INFO: Got endpoints: latency-svc-2vww7 [692.422841ms] Sep 14 12:20:41.470: INFO: Created: latency-svc-5hhxz Sep 14 12:20:41.500: INFO: Got endpoints: latency-svc-5hhxz [741.673784ms] Sep 14 12:20:41.546: INFO: Created: latency-svc-qktjn Sep 14 12:20:41.583: INFO: Got endpoints: latency-svc-qktjn [824.00077ms] Sep 14 12:20:41.629: INFO: Created: latency-svc-92pvz Sep 14 12:20:41.654: INFO: Got endpoints: latency-svc-92pvz [816.826907ms] Sep 14 12:20:41.683: INFO: Created: latency-svc-zmwcn Sep 14 12:20:41.732: INFO: Got endpoints: latency-svc-zmwcn [850.7758ms] Sep 14 12:20:41.788: INFO: Created: latency-svc-g7tn8 Sep 14 12:20:41.800: INFO: Got endpoints: latency-svc-g7tn8 [898.609071ms] Sep 14 12:20:41.822: INFO: Created: latency-svc-5qfq7 Sep 14 12:20:41.876: INFO: Got endpoints: latency-svc-5qfq7 [938.263442ms] Sep 14 12:20:41.887: INFO: Created: latency-svc-vc59r Sep 14 12:20:41.902: INFO: Got endpoints: latency-svc-vc59r [870.421743ms] Sep 14 12:20:41.924: INFO: Created: latency-svc-sbhpt Sep 14 12:20:41.932: INFO: Got endpoints: 
latency-svc-sbhpt [872.942368ms] Sep 14 12:20:41.950: INFO: Created: latency-svc-xqknm Sep 14 12:20:41.963: INFO: Got endpoints: latency-svc-xqknm [861.318928ms] Sep 14 12:20:42.020: INFO: Created: latency-svc-c9qh8 Sep 14 12:20:42.035: INFO: Got endpoints: latency-svc-c9qh8 [859.43339ms] Sep 14 12:20:42.076: INFO: Created: latency-svc-mr46v Sep 14 12:20:42.102: INFO: Got endpoints: latency-svc-mr46v [873.483743ms] Sep 14 12:20:42.157: INFO: Created: latency-svc-zttzj Sep 14 12:20:42.174: INFO: Got endpoints: latency-svc-zttzj [909.328979ms] Sep 14 12:20:42.206: INFO: Created: latency-svc-sbxqt Sep 14 12:20:42.214: INFO: Got endpoints: latency-svc-sbxqt [879.251616ms] Sep 14 12:20:42.237: INFO: Created: latency-svc-q42fm Sep 14 12:20:42.277: INFO: Got endpoints: latency-svc-q42fm [900.092311ms] Sep 14 12:20:42.291: INFO: Created: latency-svc-6x8vn Sep 14 12:20:42.305: INFO: Got endpoints: latency-svc-6x8vn [854.044645ms] Sep 14 12:20:42.334: INFO: Created: latency-svc-8trzf Sep 14 12:20:42.347: INFO: Got endpoints: latency-svc-8trzf [847.155741ms] Sep 14 12:20:42.433: INFO: Created: latency-svc-f4thd Sep 14 12:20:42.450: INFO: Got endpoints: latency-svc-f4thd [866.960252ms] Sep 14 12:20:42.512: INFO: Created: latency-svc-crdds Sep 14 12:20:42.529: INFO: Got endpoints: latency-svc-crdds [875.206694ms] Sep 14 12:20:42.612: INFO: Created: latency-svc-dx5t5 Sep 14 12:20:42.617: INFO: Got endpoints: latency-svc-dx5t5 [884.788932ms] Sep 14 12:20:42.676: INFO: Created: latency-svc-l7gww Sep 14 12:20:42.685: INFO: Got endpoints: latency-svc-l7gww [885.035625ms] Sep 14 12:20:42.750: INFO: Created: latency-svc-dhg8d Sep 14 12:20:42.775: INFO: Got endpoints: latency-svc-dhg8d [899.132579ms] Sep 14 12:20:42.799: INFO: Created: latency-svc-f7f76 Sep 14 12:20:42.811: INFO: Got endpoints: latency-svc-f7f76 [908.754355ms] Sep 14 12:20:42.830: INFO: Created: latency-svc-pcqmh Sep 14 12:20:42.847: INFO: Got endpoints: latency-svc-pcqmh [914.855426ms] Sep 14 12:20:42.882: INFO: 
Created: latency-svc-wlrwv Sep 14 12:20:42.895: INFO: Got endpoints: latency-svc-wlrwv [932.487969ms] Sep 14 12:20:42.915: INFO: Created: latency-svc-7xrf7 Sep 14 12:20:42.926: INFO: Got endpoints: latency-svc-7xrf7 [890.85587ms] Sep 14 12:20:42.961: INFO: Created: latency-svc-v975j Sep 14 12:20:42.974: INFO: Got endpoints: latency-svc-v975j [871.890198ms] Sep 14 12:20:43.048: INFO: Created: latency-svc-k7nfq Sep 14 12:20:43.053: INFO: Got endpoints: latency-svc-k7nfq [878.84703ms] Sep 14 12:20:43.071: INFO: Created: latency-svc-n7b4v Sep 14 12:20:43.082: INFO: Got endpoints: latency-svc-n7b4v [867.981844ms] Sep 14 12:20:43.101: INFO: Created: latency-svc-mvnrq Sep 14 12:20:43.113: INFO: Got endpoints: latency-svc-mvnrq [835.613204ms] Sep 14 12:20:43.131: INFO: Created: latency-svc-6z654 Sep 14 12:20:43.143: INFO: Got endpoints: latency-svc-6z654 [837.670493ms] Sep 14 12:20:43.189: INFO: Created: latency-svc-qc6tt Sep 14 12:20:43.199: INFO: Got endpoints: latency-svc-qc6tt [852.037182ms] Sep 14 12:20:43.326: INFO: Created: latency-svc-sqhwx Sep 14 12:20:43.329: INFO: Got endpoints: latency-svc-sqhwx [879.427432ms] Sep 14 12:20:43.359: INFO: Created: latency-svc-ppw98 Sep 14 12:20:43.371: INFO: Got endpoints: latency-svc-ppw98 [842.276861ms] Sep 14 12:20:43.401: INFO: Created: latency-svc-zdprh Sep 14 12:20:43.457: INFO: Got endpoints: latency-svc-zdprh [839.081731ms] Sep 14 12:20:43.501: INFO: Created: latency-svc-hk64v Sep 14 12:20:43.516: INFO: Got endpoints: latency-svc-hk64v [831.252014ms] Sep 14 12:20:43.536: INFO: Created: latency-svc-djgjs Sep 14 12:20:43.553: INFO: Got endpoints: latency-svc-djgjs [778.307834ms] Sep 14 12:20:43.622: INFO: Created: latency-svc-pv5tr Sep 14 12:20:43.637: INFO: Got endpoints: latency-svc-pv5tr [826.504017ms] Sep 14 12:20:43.689: INFO: Created: latency-svc-dwjxl Sep 14 12:20:43.697: INFO: Got endpoints: latency-svc-dwjxl [849.85504ms] Sep 14 12:20:43.753: INFO: Created: latency-svc-k59sw Sep 14 12:20:43.783: INFO: Got 
endpoints: latency-svc-k59sw [887.163976ms] Sep 14 12:20:43.812: INFO: Created: latency-svc-zj2t8 Sep 14 12:20:43.870: INFO: Got endpoints: latency-svc-zj2t8 [944.009224ms] Sep 14 12:20:43.893: INFO: Created: latency-svc-4p9sf Sep 14 12:20:43.911: INFO: Got endpoints: latency-svc-4p9sf [936.572256ms] Sep 14 12:20:43.941: INFO: Created: latency-svc-kzfd9 Sep 14 12:20:43.962: INFO: Got endpoints: latency-svc-kzfd9 [909.648151ms] Sep 14 12:20:44.008: INFO: Created: latency-svc-pc7lp Sep 14 12:20:44.049: INFO: Got endpoints: latency-svc-pc7lp [966.415443ms] Sep 14 12:20:44.170: INFO: Created: latency-svc-465vg Sep 14 12:20:44.184: INFO: Got endpoints: latency-svc-465vg [1.071232192s] Sep 14 12:20:44.208: INFO: Created: latency-svc-sfs9j Sep 14 12:20:44.221: INFO: Got endpoints: latency-svc-sfs9j [1.077992519s] Sep 14 12:20:44.238: INFO: Created: latency-svc-nvt98 Sep 14 12:20:44.251: INFO: Got endpoints: latency-svc-nvt98 [1.051491651s] Sep 14 12:20:44.268: INFO: Created: latency-svc-bkr2b Sep 14 12:20:44.301: INFO: Got endpoints: latency-svc-bkr2b [971.885782ms] Sep 14 12:20:44.361: INFO: Created: latency-svc-6msfl Sep 14 12:20:44.372: INFO: Got endpoints: latency-svc-6msfl [1.000196046s] Sep 14 12:20:44.439: INFO: Created: latency-svc-pb7sf Sep 14 12:20:44.443: INFO: Got endpoints: latency-svc-pb7sf [986.420032ms] Sep 14 12:20:44.496: INFO: Created: latency-svc-nkz6d Sep 14 12:20:44.503: INFO: Got endpoints: latency-svc-nkz6d [986.822952ms] Sep 14 12:20:44.577: INFO: Created: latency-svc-p4gt5 Sep 14 12:20:44.613: INFO: Got endpoints: latency-svc-p4gt5 [1.059243924s] Sep 14 12:20:44.642: INFO: Created: latency-svc-nghls Sep 14 12:20:44.655: INFO: Got endpoints: latency-svc-nghls [1.017961385s] Sep 14 12:20:44.672: INFO: Created: latency-svc-ppcf7 Sep 14 12:20:44.696: INFO: Got endpoints: latency-svc-ppcf7 [999.556351ms] Sep 14 12:20:44.718: INFO: Created: latency-svc-wn22k Sep 14 12:20:44.734: INFO: Got endpoints: latency-svc-wn22k [951.028374ms] Sep 14 12:20:44.754: 
INFO: Created: latency-svc-2b6hh Sep 14 12:20:44.764: INFO: Got endpoints: latency-svc-2b6hh [894.034768ms] Sep 14 12:20:44.785: INFO: Created: latency-svc-kn7vf Sep 14 12:20:44.794: INFO: Got endpoints: latency-svc-kn7vf [883.514294ms] Sep 14 12:20:44.834: INFO: Created: latency-svc-mnfxn Sep 14 12:20:44.837: INFO: Got endpoints: latency-svc-mnfxn [874.776013ms] Sep 14 12:20:44.864: INFO: Created: latency-svc-x7ljt Sep 14 12:20:44.879: INFO: Got endpoints: latency-svc-x7ljt [829.95645ms] Sep 14 12:20:44.901: INFO: Created: latency-svc-g28ms Sep 14 12:20:44.915: INFO: Got endpoints: latency-svc-g28ms [731.345754ms] Sep 14 12:20:44.971: INFO: Created: latency-svc-5vmkr Sep 14 12:20:44.976: INFO: Got endpoints: latency-svc-5vmkr [755.207388ms] Sep 14 12:20:45.000: INFO: Created: latency-svc-4k6r9 Sep 14 12:20:45.012: INFO: Got endpoints: latency-svc-4k6r9 [760.765888ms] Sep 14 12:20:45.034: INFO: Created: latency-svc-st8df Sep 14 12:20:45.042: INFO: Got endpoints: latency-svc-st8df [740.447939ms] Sep 14 12:20:45.060: INFO: Created: latency-svc-6g6dq Sep 14 12:20:45.115: INFO: Got endpoints: latency-svc-6g6dq [743.338291ms] Sep 14 12:20:45.118: INFO: Created: latency-svc-thb8p Sep 14 12:20:45.126: INFO: Got endpoints: latency-svc-thb8p [682.699469ms] Sep 14 12:20:45.146: INFO: Created: latency-svc-vrcvh Sep 14 12:20:45.163: INFO: Got endpoints: latency-svc-vrcvh [659.890663ms] Sep 14 12:20:45.212: INFO: Created: latency-svc-j7tgv Sep 14 12:20:45.265: INFO: Got endpoints: latency-svc-j7tgv [652.470029ms] Sep 14 12:20:45.283: INFO: Created: latency-svc-vttrf Sep 14 12:20:45.324: INFO: Got endpoints: latency-svc-vttrf [668.799489ms] Sep 14 12:20:45.398: INFO: Created: latency-svc-28pwn Sep 14 12:20:45.465: INFO: Got endpoints: latency-svc-28pwn [768.155706ms] Sep 14 12:20:45.522: INFO: Created: latency-svc-p52kt Sep 14 12:20:45.546: INFO: Got endpoints: latency-svc-p52kt [812.153576ms] Sep 14 12:20:45.570: INFO: Created: latency-svc-fls2r Sep 14 12:20:45.602: INFO: Got 
endpoints: latency-svc-fls2r [838.354159ms] Sep 14 12:20:45.673: INFO: Created: latency-svc-hr5sn Sep 14 12:20:45.680: INFO: Got endpoints: latency-svc-hr5sn [885.750684ms] Sep 14 12:20:45.709: INFO: Created: latency-svc-qrl7m Sep 14 12:20:45.732: INFO: Got endpoints: latency-svc-qrl7m [894.162545ms] Sep 14 12:20:45.762: INFO: Created: latency-svc-wpkrg Sep 14 12:20:45.770: INFO: Got endpoints: latency-svc-wpkrg [891.339534ms] Sep 14 12:20:45.817: INFO: Created: latency-svc-nm57j Sep 14 12:20:45.820: INFO: Got endpoints: latency-svc-nm57j [904.934854ms] Sep 14 12:20:45.842: INFO: Created: latency-svc-vskq9 Sep 14 12:20:45.855: INFO: Got endpoints: latency-svc-vskq9 [879.201261ms] Sep 14 12:20:45.896: INFO: Created: latency-svc-g5kq6 Sep 14 12:20:45.909: INFO: Got endpoints: latency-svc-g5kq6 [897.660256ms] Sep 14 12:20:45.959: INFO: Created: latency-svc-r4vh7 Sep 14 12:20:45.963: INFO: Got endpoints: latency-svc-r4vh7 [921.399339ms] Sep 14 12:20:46.008: INFO: Created: latency-svc-lccjd Sep 14 12:20:46.024: INFO: Got endpoints: latency-svc-lccjd [908.816754ms] Sep 14 12:20:46.044: INFO: Created: latency-svc-qqwck Sep 14 12:20:46.109: INFO: Got endpoints: latency-svc-qqwck [983.667882ms] Sep 14 12:20:46.111: INFO: Created: latency-svc-d5d58 Sep 14 12:20:46.126: INFO: Got endpoints: latency-svc-d5d58 [962.673274ms] Sep 14 12:20:46.178: INFO: Created: latency-svc-fsvxd Sep 14 12:20:46.198: INFO: Got endpoints: latency-svc-fsvxd [933.033501ms] Sep 14 12:20:46.241: INFO: Created: latency-svc-kfqdv Sep 14 12:20:46.244: INFO: Got endpoints: latency-svc-kfqdv [920.194622ms] Sep 14 12:20:46.265: INFO: Created: latency-svc-8zwhn Sep 14 12:20:46.277: INFO: Got endpoints: latency-svc-8zwhn [812.218854ms] Sep 14 12:20:46.296: INFO: Created: latency-svc-6r7d6 Sep 14 12:20:46.313: INFO: Got endpoints: latency-svc-6r7d6 [767.27537ms] Sep 14 12:20:46.334: INFO: Created: latency-svc-xfj9m Sep 14 12:20:46.403: INFO: Got endpoints: latency-svc-xfj9m [800.866219ms] Sep 14 12:20:46.406: 
INFO: Created: latency-svc-nwqpv Sep 14 12:20:46.410: INFO: Got endpoints: latency-svc-nwqpv [730.115102ms] Sep 14 12:20:46.457: INFO: Created: latency-svc-6wmns Sep 14 12:20:46.470: INFO: Got endpoints: latency-svc-6wmns [738.900176ms] Sep 14 12:20:46.499: INFO: Created: latency-svc-zfwj6 Sep 14 12:20:46.555: INFO: Got endpoints: latency-svc-zfwj6 [785.261913ms] Sep 14 12:20:46.592: INFO: Created: latency-svc-cw84k Sep 14 12:20:46.690: INFO: Got endpoints: latency-svc-cw84k [869.629286ms] Sep 14 12:20:46.693: INFO: Created: latency-svc-mjh4l Sep 14 12:20:46.723: INFO: Got endpoints: latency-svc-mjh4l [868.213124ms] Sep 14 12:20:46.758: INFO: Created: latency-svc-tfhk6 Sep 14 12:20:46.788: INFO: Got endpoints: latency-svc-tfhk6 [878.299721ms] Sep 14 12:20:46.876: INFO: Created: latency-svc-jfrvn Sep 14 12:20:46.885: INFO: Got endpoints: latency-svc-jfrvn [921.951856ms] Sep 14 12:20:46.934: INFO: Created: latency-svc-gdws7 Sep 14 12:20:46.964: INFO: Got endpoints: latency-svc-gdws7 [939.745154ms] Sep 14 12:20:47.015: INFO: Created: latency-svc-lf2qf Sep 14 12:20:47.018: INFO: Got endpoints: latency-svc-lf2qf [908.7458ms] Sep 14 12:20:47.045: INFO: Created: latency-svc-r65wx Sep 14 12:20:47.054: INFO: Got endpoints: latency-svc-r65wx [927.979222ms] Sep 14 12:20:47.076: INFO: Created: latency-svc-nhh84 Sep 14 12:20:47.084: INFO: Got endpoints: latency-svc-nhh84 [885.62546ms] Sep 14 12:20:47.107: INFO: Created: latency-svc-spp26 Sep 14 12:20:47.139: INFO: Got endpoints: latency-svc-spp26 [894.987004ms] Sep 14 12:20:47.149: INFO: Created: latency-svc-xhfqt Sep 14 12:20:47.175: INFO: Got endpoints: latency-svc-xhfqt [898.175771ms] Sep 14 12:20:47.197: INFO: Created: latency-svc-qxhwv Sep 14 12:20:47.211: INFO: Got endpoints: latency-svc-qxhwv [897.982868ms] Sep 14 12:20:47.236: INFO: Created: latency-svc-4txhc Sep 14 12:20:47.283: INFO: Got endpoints: latency-svc-4txhc [879.755722ms] Sep 14 12:20:47.286: INFO: Created: latency-svc-6qhxv Sep 14 12:20:47.309: INFO: Got 
endpoints: latency-svc-6qhxv [898.871949ms] Sep 14 12:20:47.309: INFO: Created: latency-svc-pqkj4 Sep 14 12:20:47.320: INFO: Got endpoints: latency-svc-pqkj4 [849.174007ms] Sep 14 12:20:47.339: INFO: Created: latency-svc-nzd9s Sep 14 12:20:47.369: INFO: Got endpoints: latency-svc-nzd9s [813.134682ms] Sep 14 12:20:47.433: INFO: Created: latency-svc-tszdz Sep 14 12:20:47.446: INFO: Got endpoints: latency-svc-tszdz [756.352882ms] Sep 14 12:20:47.479: INFO: Created: latency-svc-xn8sg Sep 14 12:20:47.495: INFO: Got endpoints: latency-svc-xn8sg [771.087079ms] Sep 14 12:20:47.515: INFO: Created: latency-svc-q98pr Sep 14 12:20:47.531: INFO: Got endpoints: latency-svc-q98pr [743.271021ms] Sep 14 12:20:47.577: INFO: Created: latency-svc-nqp5f Sep 14 12:20:47.603: INFO: Got endpoints: latency-svc-nqp5f [717.90635ms] Sep 14 12:20:47.604: INFO: Created: latency-svc-tqtqv Sep 14 12:20:47.616: INFO: Got endpoints: latency-svc-tqtqv [652.071205ms] Sep 14 12:20:47.639: INFO: Created: latency-svc-7xnfb Sep 14 12:20:47.652: INFO: Got endpoints: latency-svc-7xnfb [633.680917ms] Sep 14 12:20:47.671: INFO: Created: latency-svc-k4mqm Sep 14 12:20:47.732: INFO: Got endpoints: latency-svc-k4mqm [678.30368ms] Sep 14 12:20:47.750: INFO: Created: latency-svc-kzzrd Sep 14 12:20:47.777: INFO: Got endpoints: latency-svc-kzzrd [693.340294ms] Sep 14 12:20:47.801: INFO: Created: latency-svc-t8t46 Sep 14 12:20:47.825: INFO: Got endpoints: latency-svc-t8t46 [685.285515ms] Sep 14 12:20:47.882: INFO: Created: latency-svc-s7t4q Sep 14 12:20:47.899: INFO: Got endpoints: latency-svc-s7t4q [723.352418ms] Sep 14 12:20:47.917: INFO: Created: latency-svc-nrh9b Sep 14 12:20:47.929: INFO: Got endpoints: latency-svc-nrh9b [717.790191ms] Sep 14 12:20:47.948: INFO: Created: latency-svc-qxf77 Sep 14 12:20:47.960: INFO: Got endpoints: latency-svc-qxf77 [676.576974ms] Sep 14 12:20:47.977: INFO: Created: latency-svc-xr9zg Sep 14 12:20:48.032: INFO: Got endpoints: latency-svc-xr9zg [722.560537ms] Sep 14 12:20:48.034: 
INFO: Created: latency-svc-sqcj9 Sep 14 12:20:48.050: INFO: Got endpoints: latency-svc-sqcj9 [730.577788ms] Sep 14 12:20:48.083: INFO: Created: latency-svc-zkjv2 Sep 14 12:20:48.099: INFO: Got endpoints: latency-svc-zkjv2 [729.920438ms] Sep 14 12:20:48.121: INFO: Created: latency-svc-qvqqs Sep 14 12:20:48.169: INFO: Got endpoints: latency-svc-qvqqs [722.644243ms] Sep 14 12:20:48.172: INFO: Created: latency-svc-mwkrd Sep 14 12:20:48.182: INFO: Got endpoints: latency-svc-mwkrd [687.830904ms] Sep 14 12:20:48.233: INFO: Created: latency-svc-jlczj Sep 14 12:20:48.325: INFO: Got endpoints: latency-svc-jlczj [793.918593ms] Sep 14 12:20:48.343: INFO: Created: latency-svc-954pj Sep 14 12:20:48.377: INFO: Got endpoints: latency-svc-954pj [773.648106ms] Sep 14 12:20:48.413: INFO: Created: latency-svc-vvspg Sep 14 12:20:48.475: INFO: Got endpoints: latency-svc-vvspg [858.67466ms] Sep 14 12:20:48.504: INFO: Created: latency-svc-br89g Sep 14 12:20:48.529: INFO: Got endpoints: latency-svc-br89g [876.992494ms] Sep 14 12:20:48.613: INFO: Created: latency-svc-kvqlx Sep 14 12:20:48.662: INFO: Got endpoints: latency-svc-kvqlx [929.890096ms] Sep 14 12:20:48.683: INFO: Created: latency-svc-9sd89 Sep 14 12:20:48.699: INFO: Got endpoints: latency-svc-9sd89 [921.117151ms] Sep 14 12:20:48.762: INFO: Created: latency-svc-ht9sd Sep 14 12:20:48.777: INFO: Got endpoints: latency-svc-ht9sd [952.221406ms] Sep 14 12:20:48.801: INFO: Created: latency-svc-rmrfh Sep 14 12:20:48.813: INFO: Got endpoints: latency-svc-rmrfh [914.660203ms] Sep 14 12:20:48.907: INFO: Created: latency-svc-7fx7s Sep 14 12:20:48.911: INFO: Got endpoints: latency-svc-7fx7s [982.107326ms] Sep 14 12:20:48.962: INFO: Created: latency-svc-4xm9b Sep 14 12:20:49.037: INFO: Got endpoints: latency-svc-4xm9b [1.077699299s] Sep 14 12:20:49.049: INFO: Created: latency-svc-x9t9d Sep 14 12:20:49.066: INFO: Got endpoints: latency-svc-x9t9d [1.034280674s] Sep 14 12:20:49.085: INFO: Created: latency-svc-qnv7b Sep 14 12:20:49.096: INFO: Got 
endpoints: latency-svc-qnv7b [1.045577381s] Sep 14 12:20:49.117: INFO: Created: latency-svc-b4bc8 Sep 14 12:20:49.133: INFO: Got endpoints: latency-svc-b4bc8 [1.033809888s] Sep 14 12:20:49.181: INFO: Created: latency-svc-r2kvf Sep 14 12:20:49.186: INFO: Got endpoints: latency-svc-r2kvf [1.017216662s] Sep 14 12:20:49.207: INFO: Created: latency-svc-nj8sr Sep 14 12:20:49.217: INFO: Got endpoints: latency-svc-nj8sr [1.034237953s] Sep 14 12:20:49.237: INFO: Created: latency-svc-rxdtb Sep 14 12:20:49.247: INFO: Got endpoints: latency-svc-rxdtb [921.620016ms] Sep 14 12:20:49.271: INFO: Created: latency-svc-7vbl6 Sep 14 12:20:49.319: INFO: Got endpoints: latency-svc-7vbl6 [942.143876ms] Sep 14 12:20:49.330: INFO: Created: latency-svc-bdlfw Sep 14 12:20:49.356: INFO: Got endpoints: latency-svc-bdlfw [881.323001ms] Sep 14 12:20:49.487: INFO: Created: latency-svc-p5p98 Sep 14 12:20:49.494: INFO: Got endpoints: latency-svc-p5p98 [965.138718ms] Sep 14 12:20:49.525: INFO: Created: latency-svc-96k5v Sep 14 12:20:49.536: INFO: Got endpoints: latency-svc-96k5v [874.232342ms] Sep 14 12:20:49.558: INFO: Created: latency-svc-5wkp7 Sep 14 12:20:49.643: INFO: Got endpoints: latency-svc-5wkp7 [944.168885ms] Sep 14 12:20:49.646: INFO: Created: latency-svc-65l9d Sep 14 12:20:49.668: INFO: Got endpoints: latency-svc-65l9d [891.052682ms] Sep 14 12:20:49.704: INFO: Created: latency-svc-fhhtm Sep 14 12:20:49.732: INFO: Got endpoints: latency-svc-fhhtm [918.479657ms] Sep 14 12:20:49.798: INFO: Created: latency-svc-cbbz9 Sep 14 12:20:49.807: INFO: Got endpoints: latency-svc-cbbz9 [895.856562ms] Sep 14 12:20:49.828: INFO: Created: latency-svc-bg75l Sep 14 12:20:49.844: INFO: Got endpoints: latency-svc-bg75l [806.359645ms] Sep 14 12:20:49.864: INFO: Created: latency-svc-j96ss Sep 14 12:20:49.880: INFO: Got endpoints: latency-svc-j96ss [813.904186ms] Sep 14 12:20:49.936: INFO: Created: latency-svc-skv5w Sep 14 12:20:49.940: INFO: Got endpoints: latency-svc-skv5w [843.923457ms] Sep 14 12:20:49.968: 
INFO: Created: latency-svc-lldmm Sep 14 12:20:49.976: INFO: Got endpoints: latency-svc-lldmm [843.674992ms] Sep 14 12:20:49.998: INFO: Created: latency-svc-rvc8j Sep 14 12:20:50.013: INFO: Got endpoints: latency-svc-rvc8j [826.12775ms] Sep 14 12:20:50.086: INFO: Created: latency-svc-f2pl4 Sep 14 12:20:50.134: INFO: Got endpoints: latency-svc-f2pl4 [917.583818ms] Sep 14 12:20:50.164: INFO: Created: latency-svc-jnp47 Sep 14 12:20:50.211: INFO: Got endpoints: latency-svc-jnp47 [964.248865ms] Sep 14 12:20:50.233: INFO: Created: latency-svc-m7gvf Sep 14 12:20:50.256: INFO: Got endpoints: latency-svc-m7gvf [937.329705ms] Sep 14 12:20:50.293: INFO: Created: latency-svc-pcmzv Sep 14 12:20:50.379: INFO: Got endpoints: latency-svc-pcmzv [1.022959282s] Sep 14 12:20:50.382: INFO: Created: latency-svc-t84rb Sep 14 12:20:50.417: INFO: Got endpoints: latency-svc-t84rb [922.867567ms] Sep 14 12:20:50.454: INFO: Created: latency-svc-qc2wh Sep 14 12:20:50.504: INFO: Got endpoints: latency-svc-qc2wh [968.095169ms] Sep 14 12:20:50.514: INFO: Created: latency-svc-xnjkq Sep 14 12:20:50.536: INFO: Got endpoints: latency-svc-xnjkq [892.906685ms] Sep 14 12:20:50.568: INFO: Created: latency-svc-k5sbs Sep 14 12:20:50.578: INFO: Got endpoints: latency-svc-k5sbs [909.624407ms] Sep 14 12:20:50.703: INFO: Created: latency-svc-48pm6 Sep 14 12:20:50.728: INFO: Got endpoints: latency-svc-48pm6 [995.885373ms] Sep 14 12:20:50.729: INFO: Created: latency-svc-hsdbh Sep 14 12:20:50.760: INFO: Got endpoints: latency-svc-hsdbh [952.475979ms] Sep 14 12:20:50.839: INFO: Created: latency-svc-29zfx Sep 14 12:20:50.867: INFO: Got endpoints: latency-svc-29zfx [1.023678357s] Sep 14 12:20:50.889: INFO: Created: latency-svc-ldnp7 Sep 14 12:20:50.903: INFO: Got endpoints: latency-svc-ldnp7 [1.023428636s] Sep 14 12:20:50.978: INFO: Created: latency-svc-dvdx4 Sep 14 12:20:50.981: INFO: Got endpoints: latency-svc-dvdx4 [1.041113413s] Sep 14 12:20:51.046: INFO: Created: latency-svc-tf6sl Sep 14 12:20:51.060: INFO: Got 
endpoints: latency-svc-tf6sl [1.08359907s] Sep 14 12:20:51.077: INFO: Created: latency-svc-jwmfj Sep 14 12:20:51.144: INFO: Got endpoints: latency-svc-jwmfj [1.131675337s] Sep 14 12:20:51.145: INFO: Created: latency-svc-qjlpb Sep 14 12:20:51.171: INFO: Got endpoints: latency-svc-qjlpb [1.036568226s] Sep 14 12:20:51.202: INFO: Created: latency-svc-827d7 Sep 14 12:20:51.214: INFO: Got endpoints: latency-svc-827d7 [1.002693555s] Sep 14 12:20:51.260: INFO: Created: latency-svc-kh4vm Sep 14 12:20:51.263: INFO: Got endpoints: latency-svc-kh4vm [1.006631265s] Sep 14 12:20:51.291: INFO: Created: latency-svc-c95wj Sep 14 12:20:51.317: INFO: Got endpoints: latency-svc-c95wj [937.985361ms] Sep 14 12:20:51.348: INFO: Created: latency-svc-ddxfs Sep 14 12:20:51.445: INFO: Got endpoints: latency-svc-ddxfs [1.027758775s] Sep 14 12:20:51.449: INFO: Created: latency-svc-8mf7x Sep 14 12:20:51.461: INFO: Got endpoints: latency-svc-8mf7x [956.091251ms] Sep 14 12:20:51.483: INFO: Created: latency-svc-jgcnq Sep 14 12:20:51.497: INFO: Got endpoints: latency-svc-jgcnq [961.492309ms] Sep 14 12:20:51.531: INFO: Created: latency-svc-62jb4 Sep 14 12:20:51.612: INFO: Got endpoints: latency-svc-62jb4 [1.034626162s] Sep 14 12:20:51.635: INFO: Created: latency-svc-cbr29 Sep 14 12:20:51.648: INFO: Got endpoints: latency-svc-cbr29 [919.901732ms] Sep 14 12:20:51.701: INFO: Created: latency-svc-n2lbd Sep 14 12:20:51.768: INFO: Got endpoints: latency-svc-n2lbd [1.008072375s] Sep 14 12:20:51.795: INFO: Created: latency-svc-b5hpr Sep 14 12:20:51.811: INFO: Got endpoints: latency-svc-b5hpr [943.0563ms] Sep 14 12:20:51.834: INFO: Created: latency-svc-rvj6r Sep 14 12:20:51.840: INFO: Got endpoints: latency-svc-rvj6r [936.488546ms] Sep 14 12:20:51.861: INFO: Created: latency-svc-mppqd Sep 14 12:20:51.911: INFO: Got endpoints: latency-svc-mppqd [930.051773ms] Sep 14 12:20:51.936: INFO: Created: latency-svc-bqbzl Sep 14 12:20:51.965: INFO: Got endpoints: latency-svc-bqbzl [905.42781ms] Sep 14 12:20:51.996: 
INFO: Created: latency-svc-wvsd2
Sep 14 12:20:52.037: INFO: Got endpoints: latency-svc-wvsd2 [892.606956ms]
Sep 14 12:20:52.053: INFO: Created: latency-svc-lbcbz
Sep 14 12:20:52.070: INFO: Got endpoints: latency-svc-lbcbz [898.57841ms]
Sep 14 12:20:52.106: INFO: Created: latency-svc-d5f2p
Sep 14 12:20:52.118: INFO: Got endpoints: latency-svc-d5f2p [904.520153ms]
Sep 14 12:20:52.137: INFO: Created: latency-svc-h8n99
Sep 14 12:20:52.205: INFO: Got endpoints: latency-svc-h8n99 [942.400234ms]
Sep 14 12:20:52.224: INFO: Created: latency-svc-9xsk2
Sep 14 12:20:52.239: INFO: Got endpoints: latency-svc-9xsk2 [921.80236ms]
Sep 14 12:20:52.260: INFO: Created: latency-svc-qfkmt
Sep 14 12:20:52.274: INFO: Got endpoints: latency-svc-qfkmt [829.257654ms]
Sep 14 12:20:52.343: INFO: Created: latency-svc-p2jtl
Sep 14 12:20:52.353: INFO: Got endpoints: latency-svc-p2jtl [892.74884ms]
Sep 14 12:20:52.401: INFO: Created: latency-svc-wdgr8
Sep 14 12:20:52.421: INFO: Got endpoints: latency-svc-wdgr8 [923.688498ms]
Sep 14 12:20:52.475: INFO: Created: latency-svc-7229f
Sep 14 12:20:52.479: INFO: Got endpoints: latency-svc-7229f [866.046941ms]
Sep 14 12:20:52.479: INFO: Latencies: [78.627815ms 123.47635ms 142.768275ms 179.151475ms 273.080499ms 300.623677ms 343.09119ms 416.604622ms 469.888416ms 505.878285ms 576.408687ms 618.215097ms 633.680917ms 652.071205ms 652.470029ms 659.890663ms 668.799489ms 676.576974ms 678.30368ms 682.699469ms 685.285515ms 687.830904ms 692.422841ms 693.340294ms 717.790191ms 717.90635ms 722.560537ms 722.644243ms 723.352418ms 729.920438ms 730.115102ms 730.577788ms 731.345754ms 738.900176ms 740.447939ms 741.673784ms 743.271021ms 743.338291ms 755.207388ms 756.352882ms 760.765888ms 767.27537ms 768.155706ms 771.087079ms 773.648106ms 778.307834ms 785.261913ms 793.918593ms 800.866219ms 806.359645ms 812.153576ms 812.218854ms 813.134682ms 813.904186ms 816.826907ms 824.00077ms 826.12775ms 826.504017ms 829.257654ms 829.95645ms 831.252014ms 835.613204ms 837.670493ms 838.354159ms 839.081731ms 842.276861ms 843.674992ms 843.923457ms 847.155741ms 849.174007ms 849.85504ms 850.7758ms 852.037182ms 854.044645ms 858.67466ms 859.43339ms 861.318928ms 866.046941ms 866.960252ms 867.981844ms 868.213124ms 869.629286ms 870.421743ms 871.890198ms 872.942368ms 873.483743ms 874.232342ms 874.776013ms 875.206694ms 876.992494ms 878.299721ms 878.84703ms 879.201261ms 879.251616ms 879.427432ms 879.755722ms 881.323001ms 883.514294ms 884.788932ms 885.035625ms 885.62546ms 885.750684ms 887.163976ms 890.85587ms 891.052682ms 891.339534ms 892.606956ms 892.74884ms 892.906685ms 894.034768ms 894.162545ms 894.987004ms 895.856562ms 897.660256ms 897.982868ms 898.175771ms 898.57841ms 898.609071ms 898.871949ms 899.132579ms 900.092311ms 904.520153ms 904.934854ms 905.42781ms 908.7458ms 908.754355ms 908.816754ms 909.328979ms 909.624407ms 909.648151ms 914.660203ms 914.855426ms 917.583818ms 918.479657ms 919.901732ms 920.194622ms 921.117151ms 921.399339ms 921.620016ms 921.80236ms 921.951856ms 922.867567ms 923.688498ms 927.979222ms 929.890096ms 930.051773ms 932.487969ms 933.033501ms 936.488546ms 936.572256ms 937.329705ms 937.985361ms 938.263442ms 939.745154ms 942.143876ms 942.400234ms 943.0563ms 944.009224ms 944.168885ms 951.028374ms 952.221406ms 952.475979ms 956.091251ms 961.492309ms 962.673274ms 964.248865ms 965.138718ms 966.415443ms 968.095169ms 971.885782ms 982.107326ms 983.667882ms 986.420032ms 986.822952ms 995.885373ms 999.556351ms 1.000196046s 1.002693555s 1.006631265s 1.008072375s 1.017216662s 1.017961385s 1.022959282s 1.023428636s 1.023678357s 1.027758775s 1.033809888s 1.034237953s 1.034280674s 1.034626162s 1.036568226s 1.041113413s 1.045577381s 1.051491651s 1.059243924s 1.071232192s 1.077699299s 1.077992519s 1.08359907s 1.131675337s]
Sep 14 12:20:52.479: INFO: 50 %ile: 885.62546ms
Sep 14 12:20:52.479: INFO: 90 %ile: 1.017216662s
Sep 14 12:20:52.479: INFO: 99 %ile: 1.08359907s
Sep 14 12:20:52.479: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:20:52.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7723" for this suite.
• [SLOW TEST:15.179 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":81,"skipped":1129,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:20:52.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Sep 14 12:20:52.614: INFO: Waiting up to 5m0s for pod "client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b" in namespace "containers-9089" to be "Succeeded or Failed"
Sep 14 12:20:52.630: INFO: Pod "client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.120886ms
Sep 14 12:20:54.635: INFO: Pod "client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020393972s
Sep 14 12:20:57.553: INFO: Pod "client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.938737184s
STEP: Saw pod success
Sep 14 12:20:57.553: INFO: Pod "client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b" satisfied condition "Succeeded or Failed"
Sep 14 12:20:57.556: INFO: Trying to get logs from node latest-worker2 pod client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b container test-container:
STEP: delete the pod
Sep 14 12:20:57.818: INFO: Waiting for pod client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b to disappear
Sep 14 12:20:57.828: INFO: Pod client-containers-1aa7749b-b4e5-478c-9193-40a8bd2bdf9b no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:20:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9089" for this suite.
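The Docker Containers test above passes only `spec.containers[].args`, which replaces the image's default CMD while leaving the ENTRYPOINT intact. The documented Kubernetes command/args override rules can be sketched as follows (a simplified illustration; the function name and sample values are my own, not taken from the test):

```python
def effective_invocation(image_entrypoint, image_cmd, spec_command=None, spec_args=None):
    """Resolve the argv a container runs, per the documented Kubernetes
    command/args vs. Docker ENTRYPOINT/CMD interaction rules."""
    # spec.command, when set, replaces the image ENTRYPOINT.
    entrypoint = spec_command if spec_command is not None else image_entrypoint
    if spec_args is not None:
        tail = spec_args        # spec.args replaces the image CMD
    elif spec_command is not None:
        tail = []               # command set, args unset: the image CMD is ignored
    else:
        tail = image_cmd        # neither set: the image defaults apply
    return entrypoint + tail

# Overriding only args, as the conformance test does (values illustrative):
print(effective_invocation(["/entrypoint"], ["default-cmd"],
                           spec_args=["override", "arguments"]))
```

This is why the log shows the pod succeeding with the overridden arguments even though `spec.command` was never set.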
• [SLOW TEST:5.329 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1143,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:20:57.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-c8099e27-016e-4dcd-8ee5-2550fff0fb80
STEP: Creating configMap with name cm-test-opt-upd-c8db6caf-5349-4a3f-ab0e-b2fe557ffcc7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c8099e27-016e-4dcd-8ee5-2550fff0fb80
STEP: Updating configmap cm-test-opt-upd-c8db6caf-5349-4a3f-ab0e-b2fe557ffcc7
STEP: Creating configMap with name cm-test-opt-create-e421936d-b256-4b96-af80-775fa9406f46
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:21:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8327" for this suite.
• [SLOW TEST:10.616 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1158,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:21:08.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:21:08.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9934" for this suite.
STEP: Destroying namespace "nspatchtest-f89da002-3c4d-4d70-a116-d96de1643b3d-8398" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":84,"skipped":1178,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:21:08.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 14 12:21:08.957: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 14 12:21:09.028: INFO: Waiting for terminating namespaces to be deleted...
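The Namespaces test earlier in this section patches a namespace and then verifies the added label. A PATCH of this kind follows JSON Merge Patch (RFC 7386) semantics: nested objects merge recursively, explicit nulls delete keys, and everything else replaces the target value. A minimal sketch (the namespace name and label key/value below are placeholders, not the ones the test used):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch: dicts merge recursively,
    None deletes a key, any other value replaces the target value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)      # explicit null removes the key
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Adding a label to a namespace object, as the patch step does:
ns = {"metadata": {"name": "nspatchtest-example", "labels": {}}}
patched = json_merge_patch(ns, {"metadata": {"labels": {"testLabel": "testValue"}}})
print(patched["metadata"]["labels"])  # → {'testLabel': 'testValue'}
```

The "get the Namespace and ensuring it has the label" step then simply reads the object back and checks that the merged labels contain the new key.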
Sep 14 12:21:09.032: INFO: Logging pods the apiserver thinks are on node latest-worker before test Sep 14 12:21:09.103: INFO: coredns-f9fd979d6-rckh5 from kube-system started at 2020-09-13 16:59:56 +0000 UTC (1 container status recorded) Sep 14 12:21:09.103: INFO: Container coredns ready: true, restart count 0 Sep 14 12:21:09.103: INFO: coredns-f9fd979d6-rtr7c from kube-system started at 2020-09-13 17:00:07 +0000 UTC (1 container status recorded) Sep 14 12:21:09.103: INFO: Container coredns ready: true, restart count 0 Sep 14 12:21:09.103: INFO: kindnet-x9kfh from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container status recorded) Sep 14 12:21:09.103: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:21:09.103: INFO: kube-proxy-484ff from kube-system started at 2020-09-13 16:59:36 +0000 UTC (1 container status recorded) Sep 14 12:21:09.103: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:21:09.103: INFO: local-path-provisioner-78776bfc44-ks8gr from local-path-storage started at 2020-09-13 16:59:56 +0000 UTC (1 container status recorded) Sep 14 12:21:09.103: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 14 12:21:09.103: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Sep 14 12:21:09.119: INFO: kindnet-6mthj from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container status recorded) Sep 14 12:21:09.119: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:21:09.119: INFO: kube-proxy-thrnr from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container status recorded) Sep 14 12:21:09.119: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:21:09.119: INFO: pod-projected-configmaps-4c19f28f-070c-480e-ac17-8e581fc778b0 from projected-8327 started at 2020-09-14 12:20:58 +0000 UTC (3 container statuses recorded) Sep 14 12:21:09.119: INFO: Container createcm-volume-test ready: true, restart count 0 Sep 14
12:21:09.119: INFO: Container delcm-volume-test ready: true, restart count 0 Sep 14 12:21:09.119: INFO: Container updcm-volume-test ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1634a5cd667eb227], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:21:10.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7916" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":85,"skipped":1181,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:21:10.263: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 14 12:21:17.868: INFO: 10 pods remaining Sep 14 12:21:17.868: INFO: 0 pods have nil DeletionTimestamp Sep 14 12:21:17.868: INFO: Sep 14 12:21:19.386: INFO: 0 pods remaining Sep 14 12:21:19.386: INFO: 0 pods have nil DeletionTimestamp Sep 14 12:21:19.386: INFO: STEP: Gathering metrics W0914 12:21:21.053025 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 14 12:22:23.316: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:22:23.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9908" for this suite. 
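The garbage-collector behavior exercised here, the rc lingering while "10 pods remaining" counts down to 0, corresponds to foreground cascading deletion: the owner gets a deletionTimestamp and a foregroundDeletion finalizer, and is only removed once every dependent pod is gone. A toy simulation of that ordering (all names and the timestamp are illustrative; this is not the controller's actual code):

```python
def foreground_delete(owner, dependents):
    """Simulate foreground cascading deletion: mark the owner as
    deleting, remove dependents first, then let the owner go."""
    owner["deletionTimestamp"] = "2020-09-14T12:21:10Z"  # illustrative
    owner.setdefault("finalizers", []).append("foregroundDeletion")
    events = []
    while dependents:
        events.append(f"{len(dependents)} pods remaining, owner still present: "
                      f"{owner.get('deletionTimestamp') is not None}")
        dependents.pop()
    # Only now does the GC strip the finalizer; the owner can then disappear.
    owner["finalizers"].remove("foregroundDeletion")
    events.append("owner deleted")
    return events

rc = {"name": "simpletest-rc"}  # stand-in for the rc the test creates
log = foreground_delete(rc, [f"pod-{i}" for i in range(3)])
assert log[-1] == "owner deleted"
assert rc["finalizers"] == []
```

With background (default) propagation the owner would vanish immediately instead, which is exactly what this test's deleteOptions forbid.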
• [SLOW TEST:73.062 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":86,"skipped":1194,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:22:23.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:22:23.406: INFO: Waiting up to 5m0s for pod 
"busybox-user-65534-1f688871-e5a7-4582-a11e-cedbfd972e6e" in namespace "security-context-test-4313" to be "Succeeded or Failed" Sep 14 12:22:23.423: INFO: Pod "busybox-user-65534-1f688871-e5a7-4582-a11e-cedbfd972e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.669858ms Sep 14 12:22:25.492: INFO: Pod "busybox-user-65534-1f688871-e5a7-4582-a11e-cedbfd972e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085917355s Sep 14 12:22:27.496: INFO: Pod "busybox-user-65534-1f688871-e5a7-4582-a11e-cedbfd972e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090010624s Sep 14 12:22:27.496: INFO: Pod "busybox-user-65534-1f688871-e5a7-4582-a11e-cedbfd972e6e" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:22:27.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4313" for this suite. 
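The repeated `Phase="Pending" ... Elapsed` lines above come from a poll loop that re-reads the pod until its phase is terminal or a timeout expires, the framework's "Succeeded or Failed" condition. A stripped-down sketch of that loop (the `get_phase` callback stands in for a real API read; all names are illustrative):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports Succeeded or Failed,
    mirroring the e2e framework's 'Succeeded or Failed' condition."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Stubbed phases reproduce the Pending -> Pending -> Succeeded sequence in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None)
assert result == "Succeeded"
```

Injecting `clock` and `sleep` keeps the sketch testable without real waiting; the production framework uses a fixed 2s poll interval against a 5m deadline, matching the Elapsed values logged above.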
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1200,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:22:27.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-6594650d-5678-42e5-b3c1-c11473f417ef STEP: Creating a pod to test consume configMaps Sep 14 12:22:27.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd" in namespace "configmap-9764" to be "Succeeded or Failed" Sep 14 12:22:27.570: INFO: Pod "pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43038ms Sep 14 12:22:29.601: INFO: Pod "pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035848624s Sep 14 12:22:31.605: INFO: Pod "pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039382858s STEP: Saw pod success Sep 14 12:22:31.605: INFO: Pod "pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd" satisfied condition "Succeeded or Failed" Sep 14 12:22:31.607: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd container configmap-volume-test: STEP: delete the pod Sep 14 12:22:31.644: INFO: Waiting for pod pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd to disappear Sep 14 12:22:31.667: INFO: Pod pod-configmaps-f4f03c83-e014-495a-8a9f-4536afb274fd no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:22:31.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9764" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:22:31.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:22:42.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7034" for this suite. • [SLOW TEST:11.240 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":89,"skipped":1225,"failed":0} SS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:22:42.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:22:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9748" for this suite. 
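The ResourceQuota test above checks that quota status rises when a Service is created and falls back when it is deleted ("captures service creation" then "released usage"). A toy model of that bookkeeping (the class and its field names are illustrative, loosely following the hard/used convention of a ResourceQuota status):

```python
class QuotaTracker:
    """Track used vs. hard object counts the way a ResourceQuota status does."""
    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def create(self, resource):
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def delete(self, resource):
        self.used[resource] -= 1

quota = QuotaTracker({"services": 1})
quota.create("services")              # quota status captures the creation
assert quota.used["services"] == 1
quota.delete("services")              # deletion releases the usage
assert quota.used["services"] == 0
```

The admission-time rejection when `used + 1 > hard` is the mechanism the wider quota suite relies on; this test only asserts the counting.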
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":90,"skipped":1227,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:22:43.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 14 12:22:43.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4792' Sep 14 12:22:46.362: INFO: stderr: "" Sep 14 12:22:46.362: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 14 12:22:46.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:22:46.466: INFO: stderr: "" Sep 14 12:22:46.467: INFO: stdout: "update-demo-nautilus-rssp9 update-demo-nautilus-wmhr4 " Sep 14 12:22:46.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:22:46.584: INFO: stderr: "" Sep 14 12:22:46.585: INFO: stdout: "" Sep 14 12:22:46.585: INFO: update-demo-nautilus-rssp9 is created but not running Sep 14 12:22:51.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:22:51.683: INFO: stderr: "" Sep 14 12:22:51.683: INFO: stdout: "update-demo-nautilus-rssp9 update-demo-nautilus-wmhr4 " Sep 14 12:22:51.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:22:51.780: INFO: stderr: "" Sep 14 12:22:51.780: INFO: stdout: "true" Sep 14 12:22:51.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:22:51.877: INFO: stderr: "" Sep 14 12:22:51.877: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 14 12:22:51.877: INFO: validating pod update-demo-nautilus-rssp9 Sep 14 12:22:51.881: INFO: got data: { "image": "nautilus.jpg" } Sep 14 12:22:51.881: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 14 12:22:51.881: INFO: update-demo-nautilus-rssp9 is verified up and running Sep 14 12:22:51.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmhr4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:22:51.978: INFO: stderr: "" Sep 14 12:22:51.978: INFO: stdout: "true" Sep 14 12:22:51.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wmhr4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:22:52.073: INFO: stderr: "" Sep 14 12:22:52.073: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 14 12:22:52.073: INFO: validating pod update-demo-nautilus-wmhr4 Sep 14 12:22:52.077: INFO: got data: { "image": "nautilus.jpg" } Sep 14 12:22:52.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 14 12:22:52.077: INFO: update-demo-nautilus-wmhr4 is verified up and running STEP: scaling down the replication controller Sep 14 12:22:52.081: INFO: scanned /root for discovery docs: Sep 14 12:22:52.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4792' Sep 14 12:22:53.260: INFO: stderr: "" Sep 14 12:22:53.260: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 14 12:22:53.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:22:53.363: INFO: stderr: "" Sep 14 12:22:53.363: INFO: stdout: "update-demo-nautilus-rssp9 update-demo-nautilus-wmhr4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 14 12:22:58.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:22:58.468: INFO: stderr: "" Sep 14 12:22:58.468: INFO: stdout: "update-demo-nautilus-rssp9 update-demo-nautilus-wmhr4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 14 12:23:03.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:23:03.608: INFO: stderr: "" Sep 14 12:23:03.608: INFO: stdout: "update-demo-nautilus-rssp9 update-demo-nautilus-wmhr4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 14 12:23:08.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:23:08.710: INFO: stderr: "" Sep 14 12:23:08.711: INFO: stdout: "update-demo-nautilus-rssp9 " Sep 14 12:23:08.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:08.816: INFO: stderr: "" Sep 14 12:23:08.816: INFO: stdout: "true" Sep 14 12:23:08.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:08.917: INFO: stderr: "" Sep 14 12:23:08.917: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 14 12:23:08.917: INFO: validating pod update-demo-nautilus-rssp9 Sep 14 12:23:08.921: INFO: got data: { "image": "nautilus.jpg" } Sep 14 12:23:08.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 14 12:23:08.921: INFO: update-demo-nautilus-rssp9 is verified up and running STEP: scaling up the replication controller Sep 14 12:23:08.922: INFO: scanned /root for discovery docs: Sep 14 12:23:08.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4792' Sep 14 12:23:10.062: INFO: stderr: "" Sep 14 12:23:10.062: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 14 12:23:10.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:23:10.179: INFO: stderr: "" Sep 14 12:23:10.179: INFO: stdout: "update-demo-nautilus-cpr8x update-demo-nautilus-rssp9 " Sep 14 12:23:10.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:10.287: INFO: stderr: "" Sep 14 12:23:10.287: INFO: stdout: "" Sep 14 12:23:10.287: INFO: update-demo-nautilus-cpr8x is created but not running Sep 14 12:23:15.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4792' Sep 14 12:23:15.401: INFO: stderr: "" Sep 14 12:23:15.401: INFO: stdout: "update-demo-nautilus-cpr8x update-demo-nautilus-rssp9 " Sep 14 12:23:15.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:15.502: INFO: stderr: "" Sep 14 12:23:15.502: INFO: stdout: "true" Sep 14 12:23:15.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpr8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:15.600: INFO: stderr: "" Sep 14 12:23:15.600: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 14 12:23:15.600: INFO: validating pod update-demo-nautilus-cpr8x Sep 14 12:23:15.605: INFO: got data: { "image": "nautilus.jpg" } Sep 14 12:23:15.605: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 14 12:23:15.605: INFO: update-demo-nautilus-cpr8x is verified up and running Sep 14 12:23:15.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:15.703: INFO: stderr: "" Sep 14 12:23:15.703: INFO: stdout: "true" Sep 14 12:23:15.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rssp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4792' Sep 14 12:23:15.805: INFO: stderr: "" Sep 14 12:23:15.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 14 12:23:15.805: INFO: validating pod update-demo-nautilus-rssp9 Sep 14 12:23:15.809: INFO: got data: { "image": "nautilus.jpg" } Sep 14 12:23:15.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 14 12:23:15.809: INFO: update-demo-nautilus-rssp9 is verified up and running STEP: using delete to clean up resources Sep 14 12:23:15.809: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4792' Sep 14 12:23:15.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 14 12:23:15.927: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 14 12:23:15.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4792' Sep 14 12:23:16.034: INFO: stderr: "No resources found in kubectl-4792 namespace.\n" Sep 14 12:23:16.035: INFO: stdout: "" Sep 14 12:23:16.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4792 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 14 12:23:16.141: INFO: stderr: "" Sep 14 12:23:16.141: INFO: stdout: "update-demo-nautilus-cpr8x\nupdate-demo-nautilus-rssp9\n" Sep 14 12:23:16.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4792' Sep 14 12:23:16.744: INFO: stderr: "No resources found in kubectl-4792 namespace.\n" Sep 14 12:23:16.744: INFO: stdout: "" Sep 14 12:23:16.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4792 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 14 12:23:16.846: INFO: stderr: "" Sep 14 12:23:16.846: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:23:16.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4792" for this suite. 
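Each "is created but not running" / "true" round-trip in the scaling test above evaluates a Go template over the pod object: it prints "true" only when a containerStatus named update-demo exists and carries a state.running entry. The same predicate, applied to the JSON a `kubectl get pod -o json` would return, can be sketched in Python (the sample pod dicts are illustrative):

```python
def container_running(pod, name):
    """Mirror the e2e Go template: true iff a containerStatus with the
    given name exists and has a state.running entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

pending_pod = {"status": {}}  # containerStatuses not populated yet
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-09-14T12:22:50Z"}}}]}}

assert not container_running(pending_pod, "update-demo")  # "created but not running"
assert container_running(running_pod, "update-demo")      # template prints "true"
```

The guard for a missing `status.containerStatuses` matters: the template's `exists` checks are why the first poll after pod creation prints an empty string rather than erroring.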
• [SLOW TEST:33.766 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
should scale a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":91,"skipped":1227,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:23:16.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 14 12:23:21.395: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:23:21.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4162" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":92,"skipped":1246,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:23:21.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:23:21.537: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f1227a17-17f5-4136-a00e-3e1e19199490" in namespace "security-context-test-3228" to be "Succeeded or Failed"
Sep 14 12:23:21.596: INFO: Pod "busybox-readonly-false-f1227a17-17f5-4136-a00e-3e1e19199490": Phase="Pending", Reason="", readiness=false. Elapsed: 58.379452ms
Sep 14 12:23:23.600: INFO: Pod "busybox-readonly-false-f1227a17-17f5-4136-a00e-3e1e19199490": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06313131s
Sep 14 12:23:25.604: INFO: Pod "busybox-readonly-false-f1227a17-17f5-4136-a00e-3e1e19199490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067073015s
Sep 14 12:23:25.604: INFO: Pod "busybox-readonly-false-f1227a17-17f5-4136-a00e-3e1e19199490" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:23:25.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3228" for this suite.
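The test above asserts that a container whose securityContext sets readOnlyRootFilesystem=false can write to its root filesystem; inside the container that probe amounts to creating a file. A minimal local sketch of the check, with a temporary directory standing in for the container's root mount (this is an illustration, not the e2e framework's code):

```python
import os
import tempfile

def rootfs_writable(path):
    """Return True if a file can be created under path.

    Stand-in for the in-container 'can I write to /' probe; the probe
    file name is arbitrary and removed again on success.
    """
    probe = os.path.join(path, ".rw-probe")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        os.remove(probe)
        return True
    except OSError:
        return False

with tempfile.TemporaryDirectory() as mount:
    # A writable mount behaves like rootfs with readOnlyRootFilesystem=false.
    print(rootfs_writable(mount))  # True
```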
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1254,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:23:25.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:23:57.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7723" for this suite.
• [SLOW TEST:31.477 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
when starting a container that exits
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
should run with the expected status [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1256,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:23:57.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:23:57.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1767" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":95,"skipped":1271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:23:57.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:23:57.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:23:59.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683037, 
loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683037, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683037, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683037, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:24:02.762: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:24:02.771: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "webhook-7328" for this suite. STEP: Destroying namespace "webhook-7328-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.725 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":96,"skipped":1306,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:24:02.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support 
subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-zlkd STEP: Creating a pod to test atomic-volume-subpath Sep 14 12:24:03.180: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zlkd" in namespace "subpath-9755" to be "Succeeded or Failed" Sep 14 12:24:03.190: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.369367ms Sep 14 12:24:05.195: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014666685s Sep 14 12:24:07.199: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 4.019274663s Sep 14 12:24:09.203: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 6.023496218s Sep 14 12:24:11.209: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 8.028756636s Sep 14 12:24:13.213: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 10.032876475s Sep 14 12:24:15.217: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 12.036722629s Sep 14 12:24:17.220: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 14.040444607s Sep 14 12:24:19.254: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 16.074055072s Sep 14 12:24:21.258: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 18.07785179s Sep 14 12:24:23.279: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.098962183s Sep 14 12:24:25.283: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Running", Reason="", readiness=true. Elapsed: 22.103581079s Sep 14 12:24:27.287: INFO: Pod "pod-subpath-test-downwardapi-zlkd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.106978797s STEP: Saw pod success Sep 14 12:24:27.287: INFO: Pod "pod-subpath-test-downwardapi-zlkd" satisfied condition "Succeeded or Failed" Sep 14 12:24:27.290: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-zlkd container test-container-subpath-downwardapi-zlkd: STEP: delete the pod Sep 14 12:24:27.379: INFO: Waiting for pod pod-subpath-test-downwardapi-zlkd to disappear Sep 14 12:24:27.383: INFO: Pod pod-subpath-test-downwardapi-zlkd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-zlkd Sep 14 12:24:27.383: INFO: Deleting pod "pod-subpath-test-downwardapi-zlkd" in namespace "subpath-9755" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:24:27.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9755" for this suite. 
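The repeated 'Waiting up to 5m0s for pod … to be "Succeeded or Failed"' entries above come from the framework polling the pod's phase until it reaches a terminal state or the timeout expires. A simplified sketch of that loop (`get_phase` is a hypothetical stand-in for the API call the framework makes; the real framework polls every two seconds):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a phase in `want` or timeout.

    Sketch of the e2e framework's wait loop, not its actual code.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach %s within %ss" % (want, timeout))

# Fake phase source mirroring the Pending -> Running -> Succeeded
# progression logged above.
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), timeout=1.0))  # Succeeded
```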
• [SLOW TEST:24.737 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":97,"skipped":1311,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:24:27.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 14 12:24:27.864: INFO: Waiting up to 5m0s for pod "pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7" in namespace "emptydir-5667" to be "Succeeded or Failed"
Sep 14 12:24:27.867: INFO: Pod "pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649768ms
Sep 14 12:24:29.871: INFO: Pod "pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673051s
Sep 14 12:24:31.875: INFO: Pod "pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011695728s
STEP: Saw pod success
Sep 14 12:24:31.875: INFO: Pod "pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7" satisfied condition "Succeeded or Failed"
Sep 14 12:24:31.879: INFO: Trying to get logs from node latest-worker2 pod pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7 container test-container:
STEP: delete the pod
Sep 14 12:24:31.933: INFO: Waiting for pod pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7 to disappear
Sep 14 12:24:31.939: INFO: Pod pod-c157540d-9ca0-4da0-a2ca-8040d79b6bd7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:24:31.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5667" for this suite.
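The emptydir (root,0644,tmpfs) test above writes a file into a tmpfs-backed emptyDir volume with mode 0644 and then verifies the mode and content it reads back. The permission half of that check reduces to comparing the file's mode bits; a local sketch with a temporary directory standing in for the volume mount (file name and content are illustrative):

```python
import os
import stat
import tempfile

# Create a file in a stand-in "volume", force its mode to 0644, and
# read the permission bits back the way the mount-test container does.
with tempfile.TemporaryDirectory() as vol:
    path = os.path.join(vol, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(path, 0o644)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o644
```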
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":98,"skipped":1322,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:24:31.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:24:31.990: INFO: Creating deployment "webserver-deployment" Sep 14 12:24:31.998: INFO: Waiting for observed generation 1 Sep 14 12:24:34.042: INFO: Waiting for all required pods to come up Sep 14 12:24:34.047: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Sep 14 12:24:44.163: INFO: Waiting for deployment "webserver-deployment" to complete Sep 14 12:24:44.169: INFO: Updating deployment "webserver-deployment" with a non-existent image Sep 14 12:24:44.177: INFO: Updating deployment webserver-deployment Sep 14 12:24:44.177: INFO: Waiting for observed generation 2 Sep 14 12:24:46.973: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Sep 14 
12:24:47.190: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Sep 14 12:24:47.193: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 14 12:24:47.209: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Sep 14 12:24:47.209: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Sep 14 12:24:47.231: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Sep 14 12:24:47.235: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Sep 14 12:24:47.235: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Sep 14 12:24:47.242: INFO: Updating deployment webserver-deployment Sep 14 12:24:47.242: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Sep 14 12:24:48.023: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Sep 14 12:24:48.063: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 14 12:24:51.180: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8242 /apis/apps/v1/namespaces/deployment-8242/deployments/webserver-deployment 8a7df752-3aa7-4438-a005-db017c4d22e6 264001 3 2020-09-14 12:24:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-14 12:24:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003716b18 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-14 12:24:47 +0000 UTC,LastTransitionTime:2020-09-14 12:24:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-14 12:24:48 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 14 12:24:51.202: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8242 /apis/apps/v1/namespaces/deployment-8242/replicasets/webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 263995 3 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8a7df752-3aa7-4438-a005-db017c4d22e6 0xc003717027 0xc003717028}] [] [{kube-controller-manager Update apps/v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a7df752-3aa7-4438-a005-db017c4d22e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037170a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:24:51.202: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 14 12:24:51.202: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-8242 /apis/apps/v1/namespaces/deployment-8242/replicasets/webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 263989 3 2020-09-14 12:24:31 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8a7df752-3aa7-4438-a005-db017c4d22e6 0xc003717107 0xc003717108}] [] [{kube-controller-manager Update apps/v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8a7df752-3aa7-4438-a005-db017c4d22e6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Select
or:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003717178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:24:51.543: INFO: Pod "webserver-deployment-795d758f88-45krm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-45krm webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-45krm 81d64075-02e6-49b0-b4f6-8204056ad2d1 264007 0 2020-09-14 12:24:47 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0037176c7 0xc0037176c8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.543: INFO: Pod "webserver-deployment-795d758f88-5ddkd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5ddkd webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-5ddkd e035ee29-a733-41b8-9ad7-2bace7ce9423 264054 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc003717880 0xc003717881}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.544: INFO: Pod "webserver-deployment-795d758f88-5mcfm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5mcfm webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-5mcfm 9c38daa3-f8fb-4d84-8064-d9306ecd587e 264067 0 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc003717a40 0xc003717a41}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]En
vVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephe
meralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.187,StartTime:2020-09-14 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.544: INFO: Pod "webserver-deployment-795d758f88-h4bm9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-h4bm9 webserver-deployment-795d758f88- deployment-8242 
/api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-h4bm9 87575b20-993a-47c5-8371-754a2f51b43d 264023 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc003717c10 0xc003717c11}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.545: INFO: Pod "webserver-deployment-795d758f88-hvm44" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hvm44 webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-hvm44 8ba62b0a-fa8b-4fad-829c-a8c4c932ef6b 264012 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc003717df0 0xc003717df1}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.545: INFO: Pod "webserver-deployment-795d758f88-ll4gv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ll4gv webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-ll4gv 33ae36ca-44a6-4cd2-95d9-4f95f4b20230 264035 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc003717f90 0xc003717f91}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.545: INFO: Pod "webserver-deployment-795d758f88-qmlqt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qmlqt webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-qmlqt 10d32929-5ebb-46f6-9537-e940932ad2c7 263902 0 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da130 0xc0044da131}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.546: INFO: Pod "webserver-deployment-795d758f88-qz7tm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qz7tm webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-qz7tm 7f3b55ca-17a6-4ba1-9495-f28a2098d626 264048 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da2f0 0xc0044da2f1}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.546: INFO: Pod "webserver-deployment-795d758f88-strd8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-strd8 webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-strd8 c52ff330-8ff1-4f51-8d1a-1ae0427f86df 263922 0 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da490 0xc0044da491}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.546: INFO: Pod "webserver-deployment-795d758f88-tmxxj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tmxxj webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-tmxxj 2e33b153-56c7-48a6-92a7-5159f23a9164 263910 0 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da630 0xc0044da631}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.546: INFO: Pod "webserver-deployment-795d758f88-tp8xf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tp8xf webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-tp8xf 8776352e-796b-46fe-a460-a30f0eced76c 264049 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da7d0 0xc0044da7d1}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.547: INFO: Pod "webserver-deployment-795d758f88-v879f" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v879f webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-v879f 2effba0e-a610-444a-aef4-c33afd6c68cb 264060 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044da970 0xc0044da971}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.547: INFO: Pod "webserver-deployment-795d758f88-xtspv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xtspv webserver-deployment-795d758f88- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-795d758f88-xtspv c7f13d4b-d9da-474f-aeee-3ff5d66c119e 264068 0 2020-09-14 12:24:44 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 24894a30-97d5-4034-8bc8-7e69ef9bd694 0xc0044dab20 0xc0044dab21}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24894a30-97d5-4034-8bc8-7e69ef9bd694\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.98,StartTime:2020-09-14 12:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.547: INFO: Pod "webserver-deployment-dd94f59b7-4f4q2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4f4q2 webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-4f4q2 d0034640-b9c0-4a43-af01-e34ed15d2afa 264053 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044dad20 0xc0044dad21}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.548: INFO: Pod "webserver-deployment-dd94f59b7-52dlv" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-52dlv webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-52dlv 4949a6e4-9b0a-4e8c-924e-5c0f387bd560 263856 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044daeb7 0xc0044daeb8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.95\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.95,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://947b0d759770d3655c06b82f036c63ac8ea4d0ee0cdd51eacde3af36888c3df9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.548: INFO: Pod "webserver-deployment-dd94f59b7-57q6q" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-57q6q webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-57q6q 383c3573-c5cb-4f62-b476-1a27a0792c60 264059 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db067 
0xc0044db068}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDi
r:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCons
traint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.548: INFO: Pod "webserver-deployment-dd94f59b7-5jglm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5jglm webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-5jglm e40bccaf-bf08-4c29-82d2-006b2e4fb53e 263842 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db1f7 0xc0044db1f8}] [] 
[{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[]
,Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[
]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.94,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://de5ebde56d71b073e412a0f368033cc52fc35470a64c9c2211f0208aa26a9930,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-bvwft" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bvwft webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-bvwft 7fe7aa9f-3434-459d-8815-5f1569ca3247 264027 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet 
webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db3a7 0xc0044db3a8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Na
me:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLow
erPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-d52lq" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d52lq webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-d52lq 64dd8dff-d47f-42fd-96d4-3e18c657dccc 263851 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db537 0xc0044db538}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Contai
ner{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*Pree
mptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.183,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ae81544bb83e6dd168f8a2420383459c148b74c02dffd8f814c74544df3b6f31,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-hxm66" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hxm66 webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-hxm66 bda4fe70-31ae-4b11-babf-d789e335bfe6 263849 0 2020-09-14 12:24:32 +0000 UTC 
map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db6f7 0xc0044db6f8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:
nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Re
adinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.96,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1e1c15a2cded097d45592415393eb2843af791edfc43406814adedc28a8b8626,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-hzx7z" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hzx7z webserver-deployment-dd94f59b7- deployment-8242 
/api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-hzx7z e1f96578-ff40-41c3-997d-06504faedeb4 263817 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044db8a7 0xc0044db8a8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.182,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec51dec7d6bf5f3d93851ea7ea6551226da7efca2be7d8ba50ab37afadc15c1a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-kv5q9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kv5q9 webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-kv5q9 39d3e763-4456-4c22-b02d-66e6af599e8b 263997 0 2020-09-14 12:24:47 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044dba57 0xc0044dba58}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.549: INFO: Pod "webserver-deployment-dd94f59b7-l59nx" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l59nx webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-l59nx 026c1807-a427-4066-8210-bd300cd0cbcf 263844 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044dbbe7 0xc0044dbbe8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.184,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://43877e083bdceef8dd249ca440607c52af14a42a322cf93345edb6281f3d205c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-lv6cs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lv6cs webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-lv6cs 6c4e4d90-1301-49f8-ab03-7d30e860bcb7 264030 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 
0xc0044dbd97 0xc0044dbd98}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args
:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Topo
logySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-p5mrg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-p5mrg webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-p5mrg ac83c7c6-52c4-4455-a6c6-4d72f0f4c38f 264006 0 2020-09-14 12:24:47 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc0044dbf27 0xc0044dbf28}] [] 
[{kube-controller-manager Update v1 2020-09-14 12:24:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemera
lContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-pnlhx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pnlhx webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-pnlhx 87360b4a-995a-4eec-b9db-2227a93e10bf 264040 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a0b7 0xc00435a0b8}] [] [{kube-controller-manager Update v1 
2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Re
sourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Se
tHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-pswpk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pswpk webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-pswpk aaa706ee-a24a-4378-8ee8-6eee1ba63670 264033 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a247 0xc00435a248}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC 
FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Reso
urceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:P
odStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-t4snj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t4snj webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-t4snj cd0ee726-af18-4d93-b441-d487923a7752 264044 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a3e7 0xc00435a3e8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.550: INFO: Pod "webserver-deployment-dd94f59b7-t8dh8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t8dh8 webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-t8dh8 66465eba-47bd-4e79-9c67-02cc95206b73 263857 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a577 0xc00435a578}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.185,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f2660c5440041371de52565af27672385a4972f7aa501b50d71e24df613a8d2b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.551: INFO: Pod "webserver-deployment-dd94f59b7-tnsdk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tnsdk webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-tnsdk 163afd03-845d-4cc5-a543-927ab9a66d4b 263803 0 2020-09-14 12:24:32 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a727 
0xc00435a728}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38
-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologyS
preadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.93,StartTime:2020-09-14 12:24:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:24:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://36c72552d7150e63e40c1b98dfb5de1eb4fa568705878c32441b13292e875e23,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.551: INFO: Pod "webserver-deployment-dd94f59b7-vmjnz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vmjnz webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-vmjnz f19fa59a-1979-42ec-9ddd-42eb48766d9f 264015 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] 
[{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435a8d7 0xc00435a8d8}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]C
ontainer{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemp
tionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.551: INFO: Pod "webserver-deployment-dd94f59b7-x5n55" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x5n55 webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-x5n55 1379ec7c-d6b0-46b6-8de8-eab3fe823191 264020 0 2020-09-14 12:24:48 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet 
webserver-deployment-dd94f59b7 74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435aa77 0xc00435aa78}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Na
me:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLow
erPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 12:24:51.551: INFO: Pod "webserver-deployment-dd94f59b7-z5vkq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z5vkq webserver-deployment-dd94f59b7- deployment-8242 /api/v1/namespaces/deployment-8242/pods/webserver-deployment-dd94f59b7-z5vkq bdfedac1-6b45-4c6c-b792-92835433d4b3 263998 0 2020-09-14 12:24:47 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
74b0ff97-db3e-43c8-bb32-962cae6fec70 0xc00435ac07 0xc00435ac08}] [] [{kube-controller-manager Update v1 2020-09-14 12:24:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74b0ff97-db3e-43c8-bb32-962cae6fec70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:24:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mgfdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mgfdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/librar
y/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mgfdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceLi
st{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:24:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 12:24:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:24:51.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8242" for this suite. 
• [SLOW TEST:19.997 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":99,"skipped":1332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:24:51.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:24:52.492: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:25:07.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1372" for this suite. • [SLOW TEST:15.892 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1412,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:25:07.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 14 12:25:08.505: INFO: Pod name pod-release: Found 0 pods out of 1 Sep 14 12:25:13.986: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:25:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9106" for this suite. • [SLOW TEST:7.290 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":101,"skipped":1423,"failed":0} S ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:25:15.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: 
Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 14 12:25:16.285: INFO: Major version: 1 STEP: Confirm minor version Sep 14 12:25:16.285: INFO: cleanMinorVersion: 19 Sep 14 12:25:16.285: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:25:16.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-9890" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":102,"skipped":1424,"failed":0} ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:25:16.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: 
creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 14 12:25:17.903: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264465 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:25:17.903: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264465 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 14 12:25:27.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264543 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:25:27.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 
/api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264543 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:27 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 14 12:25:37.947: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264573 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:25:37.948: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264573 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 14 12:25:47.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264604 0 2020-09-14 12:25:17 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:25:47.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-a 0b8fdcf5-4f16-4ede-9508-d3737be815ad 264604 0 2020-09-14 12:25:17 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:37 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Sep 14 12:25:57.967: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-b 314ad1a5-efa1-420f-b5e5-fac6e0dcfbd8 264637 0 2020-09-14 12:25:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:25:57.967: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-b 314ad1a5-efa1-420f-b5e5-fac6e0dcfbd8 264637 0 2020-09-14 12:25:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring 
the correct watchers observe the notification Sep 14 12:26:08.133: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-b 314ad1a5-efa1-420f-b5e5-fac6e0dcfbd8 264668 0 2020-09-14 12:25:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 12:26:08.134: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5286 /api/v1/namespaces/watch-5286/configmaps/e2e-watch-test-configmap-b 314ad1a5-efa1-420f-b5e5-fac6e0dcfbd8 264668 0 2020-09-14 12:25:57 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-14 12:25:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:26:18.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5286" for this suite. 
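The label-selector watches that produced the ADDED/MODIFIED/DELETED events above can be reproduced with plain kubectl. This is a sketch assuming a live cluster; object and label names mirror the test but are otherwise arbitrary:

```shell
# Stream watch events for ConfigMaps carrying label A.
kubectl get configmaps -l watch-this-configmap=multiple-watchers-A \
    --watch --output-watch-events &

# Generate ADDED, MODIFIED, and DELETED events on the watch above.
kubectl create configmap e2e-watch-test-configmap-a
kubectl label configmap e2e-watch-test-configmap-a \
    watch-this-configmap=multiple-watchers-A
kubectl patch configmap e2e-watch-test-configmap-a \
    -p '{"data":{"mutation":"1"}}'
kubectl delete configmap e2e-watch-test-configmap-a
```

A watch opened with selector `multiple-watchers-B` (or one matching A *or* B via a set-based selector) would see only the events for its own labels, which is exactly what the three watchers in the test verify.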
• [SLOW TEST:61.291 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":103,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:26:18.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0914 12:26:19.263943 7 metrics_grabber.go:105] Did not receive an 
external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 14 12:27:21.285: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:27:21.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2479" for this suite. • [SLOW TEST:63.147 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":104,"skipped":1461,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:27:21.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1685 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-1685 Sep 14 12:27:21.406: INFO: Found 0 stateful pods, waiting for 1 Sep 14 12:27:31.410: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 14 12:27:31.453: INFO: Deleting all statefulset in ns statefulset-1685 Sep 14 12:27:31.495: INFO: Scaling statefulset ss to 0 Sep 14 12:27:41.632: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:27:41.634: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:27:41.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1685" for this suite. 
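The "getting scale subresource / updating a scale subresource" steps above map onto operations you can issue directly. A sketch, assuming a live cluster (the `--subresource` flag on `kubectl patch` requires a newer kubectl, v1.24+; the raw-API form works on older clients):

```shell
# Read the /scale subresource of the statefulset the test created.
kubectl get --raw \
    /apis/apps/v1/namespaces/statefulset-1685/statefulsets/ss/scale

# Update replicas through the scale subresource rather than the full object.
kubectl scale statefulset/ss --replicas=2 -n statefulset-1685

# Equivalent on newer kubectl: patch the subresource explicitly.
kubectl patch statefulset ss -n statefulset-1685 \
    --subresource=scale --type=merge -p '{"spec":{"replicas":2}}'
```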
• [SLOW TEST:20.405 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":105,"skipped":1470,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:27:41.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting 
/apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 14 12:27:41.813: INFO: starting watch STEP: patching STEP: updating Sep 14 12:27:41.821: INFO: waiting for watch events with expected annotations Sep 14 12:27:41.821: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:27:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4344" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":106,"skipped":1481,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:27:41.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] 
/apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 14 12:27:41.813: INFO: starting watch STEP: patching STEP: updating Sep 14 12:27:41.821: INFO: waiting for watch events with expected annotations Sep 14 12:27:41.821: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:27:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4344" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":106,"skipped":1481,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:27:41.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52 Sep 14 12:27:41.983: INFO: Pod name my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52: Found 0 pods out of 1 Sep 14 12:27:46.986: INFO: Pod name my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52: Found 1 pods out of 1 Sep 14 12:27:46.986: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52" are running Sep 14 12:27:46.989: INFO: Pod "my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52-kr8ws" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 12:27:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 12:27:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 12:27:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 12:27:41 +0000 UTC Reason: Message:}]) Sep 14 12:27:46.989: INFO: Trying to dial the pod Sep 14 12:27:52.003: INFO: Controller my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52: Got expected result from replica 1 [my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52-kr8ws]: "my-hostname-basic-fb04e018-5b2f-4f8e-9cf5-d0a6c8550f52-kr8ws", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:27:52.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-384" for this suite. 
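A ReplicationController equivalent to the one the test creates looks like the config fragment below (name shortened; the test generates a UUID-suffixed name, and the agnhost image tag shown is illustrative). Each replica serves its own pod name over HTTP, which is the "expected result from replica" check in the log:

```shell
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # tag is illustrative
        args: ["serve-hostname"]
EOF
```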
• [SLOW TEST:10.151 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":107,"skipped":1482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:27:52.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 14 12:28:00.196: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 14 12:28:00.214: INFO: Pod pod-with-poststart-exec-hook still exists Sep 14 12:28:02.214: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 14 12:28:02.220: INFO: Pod pod-with-poststart-exec-hook still exists Sep 14 12:28:04.214: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 14 12:28:04.218: INFO: Pod pod-with-poststart-exec-hook still exists Sep 14 12:28:06.214: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 14 12:28:06.218: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:28:06.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5317" for this suite. 
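The pod-with-poststart-exec-hook pod above can be approximated with a config fragment like this (image and hook command are illustrative; the real test uses its own handler container to record the hook firing):

```shell
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hook-container
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts; the
          # container is not Ready until the hook completes.
          command: ["sh", "-c", "echo poststart > /tmp/hook"]
EOF
```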
• [SLOW TEST:14.214 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1507,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:28:06.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 14 12:28:06.280: INFO: 
namespace kubectl-9411 Sep 14 12:28:06.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9411' Sep 14 12:28:07.010: INFO: stderr: "" Sep 14 12:28:07.010: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 14 12:28:08.046: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:28:08.046: INFO: Found 0 / 1 Sep 14 12:28:09.015: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:28:09.015: INFO: Found 0 / 1 Sep 14 12:28:10.014: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:28:10.014: INFO: Found 0 / 1 Sep 14 12:28:11.016: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:28:11.016: INFO: Found 1 / 1 Sep 14 12:28:11.016: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 14 12:28:11.019: INFO: Selector matched 1 pods for map[app:agnhost] Sep 14 12:28:11.019: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 14 12:28:11.019: INFO: wait on agnhost-primary startup in kubectl-9411 Sep 14 12:28:11.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config logs agnhost-primary-hw6bq agnhost-primary --namespace=kubectl-9411' Sep 14 12:28:11.171: INFO: stderr: "" Sep 14 12:28:11.171: INFO: stdout: "Paused\n" STEP: exposing RC Sep 14 12:28:11.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9411' Sep 14 12:28:11.340: INFO: stderr: "" Sep 14 12:28:11.340: INFO: stdout: "service/rm2 exposed\n" Sep 14 12:28:11.371: INFO: Service rm2 in namespace kubectl-9411 found. 
STEP: exposing service Sep 14 12:28:13.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9411' Sep 14 12:28:13.518: INFO: stderr: "" Sep 14 12:28:13.518: INFO: stdout: "service/rm3 exposed\n" Sep 14 12:28:13.567: INFO: Service rm3 in namespace kubectl-9411 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:28:15.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9411" for this suite. • [SLOW TEST:9.364 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":109,"skipped":1508,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:28:15.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2823 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2823 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2823 Sep 14 12:28:15.674: INFO: Found 0 stateful pods, waiting for 1 Sep 14 12:28:25.678: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 14 12:28:25.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:28:25.951: INFO: stderr: "I0914 12:28:25.823771 1335 log.go:181] (0xc000c893f0) (0xc0005aa8c0) Create stream\nI0914 12:28:25.823823 1335 log.go:181] (0xc000c893f0) 
(0xc0005aa8c0) Stream added, broadcasting: 1\nI0914 12:28:25.830049 1335 log.go:181] (0xc000c893f0) Reply frame received for 1\nI0914 12:28:25.830092 1335 log.go:181] (0xc000c893f0) (0xc000637f40) Create stream\nI0914 12:28:25.830107 1335 log.go:181] (0xc000c893f0) (0xc000637f40) Stream added, broadcasting: 3\nI0914 12:28:25.831048 1335 log.go:181] (0xc000c893f0) Reply frame received for 3\nI0914 12:28:25.831099 1335 log.go:181] (0xc000c893f0) (0xc0005aa000) Create stream\nI0914 12:28:25.831117 1335 log.go:181] (0xc000c893f0) (0xc0005aa000) Stream added, broadcasting: 5\nI0914 12:28:25.831972 1335 log.go:181] (0xc000c893f0) Reply frame received for 5\nI0914 12:28:25.900759 1335 log.go:181] (0xc000c893f0) Data frame received for 5\nI0914 12:28:25.900788 1335 log.go:181] (0xc0005aa000) (5) Data frame handling\nI0914 12:28:25.900803 1335 log.go:181] (0xc0005aa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:28:25.942203 1335 log.go:181] (0xc000c893f0) Data frame received for 3\nI0914 12:28:25.942228 1335 log.go:181] (0xc000637f40) (3) Data frame handling\nI0914 12:28:25.942246 1335 log.go:181] (0xc000637f40) (3) Data frame sent\nI0914 12:28:25.942254 1335 log.go:181] (0xc000c893f0) Data frame received for 3\nI0914 12:28:25.942260 1335 log.go:181] (0xc000637f40) (3) Data frame handling\nI0914 12:28:25.942527 1335 log.go:181] (0xc000c893f0) Data frame received for 5\nI0914 12:28:25.942541 1335 log.go:181] (0xc0005aa000) (5) Data frame handling\nI0914 12:28:25.944823 1335 log.go:181] (0xc000c893f0) Data frame received for 1\nI0914 12:28:25.944864 1335 log.go:181] (0xc0005aa8c0) (1) Data frame handling\nI0914 12:28:25.944893 1335 log.go:181] (0xc0005aa8c0) (1) Data frame sent\nI0914 12:28:25.944917 1335 log.go:181] (0xc000c893f0) (0xc0005aa8c0) Stream removed, broadcasting: 1\nI0914 12:28:25.944947 1335 log.go:181] (0xc000c893f0) Go away received\nI0914 12:28:25.945536 1335 log.go:181] (0xc000c893f0) (0xc0005aa8c0) Stream removed, 
broadcasting: 1\nI0914 12:28:25.945578 1335 log.go:181] (0xc000c893f0) (0xc000637f40) Stream removed, broadcasting: 3\nI0914 12:28:25.945600 1335 log.go:181] (0xc000c893f0) (0xc0005aa000) Stream removed, broadcasting: 5\n" Sep 14 12:28:25.951: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:28:25.951: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:28:25.975: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 14 12:28:35.981: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:28:35.981: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:28:35.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999547s Sep 14 12:28:37.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994798959s Sep 14 12:28:38.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.908462615s Sep 14 12:28:39.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.903554391s Sep 14 12:28:40.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.899476785s Sep 14 12:28:41.103: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.89472876s Sep 14 12:28:42.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.889782379s Sep 14 12:28:43.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.885617655s Sep 14 12:28:44.118: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.87991563s Sep 14 12:28:45.122: INFO: Verifying statefulset ss doesn't scale past 1 for another 874.602811ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2823 Sep 14 12:28:46.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:28:46.367: INFO: stderr: "I0914 12:28:46.273384 1353 log.go:181] (0xc0005ecf20) (0xc000628280) Create stream\nI0914 12:28:46.273432 1353 log.go:181] (0xc0005ecf20) (0xc000628280) Stream added, broadcasting: 1\nI0914 12:28:46.278621 1353 log.go:181] (0xc0005ecf20) Reply frame received for 1\nI0914 12:28:46.278664 1353 log.go:181] (0xc0005ecf20) (0xc0006297c0) Create stream\nI0914 12:28:46.278677 1353 log.go:181] (0xc0005ecf20) (0xc0006297c0) Stream added, broadcasting: 3\nI0914 12:28:46.279689 1353 log.go:181] (0xc0005ecf20) Reply frame received for 3\nI0914 12:28:46.279737 1353 log.go:181] (0xc0005ecf20) (0xc000d620a0) Create stream\nI0914 12:28:46.279758 1353 log.go:181] (0xc0005ecf20) (0xc000d620a0) Stream added, broadcasting: 5\nI0914 12:28:46.280927 1353 log.go:181] (0xc0005ecf20) Reply frame received for 5\nI0914 12:28:46.359610 1353 log.go:181] (0xc0005ecf20) Data frame received for 3\nI0914 12:28:46.359642 1353 log.go:181] (0xc0006297c0) (3) Data frame handling\nI0914 12:28:46.359651 1353 log.go:181] (0xc0006297c0) (3) Data frame sent\nI0914 12:28:46.359659 1353 log.go:181] (0xc0005ecf20) Data frame received for 3\nI0914 12:28:46.359665 1353 log.go:181] (0xc0006297c0) (3) Data frame handling\nI0914 12:28:46.359700 1353 log.go:181] (0xc0005ecf20) Data frame received for 5\nI0914 12:28:46.359742 1353 log.go:181] (0xc000d620a0) (5) Data frame handling\nI0914 12:28:46.359777 1353 log.go:181] (0xc000d620a0) (5) Data frame sent\nI0914 12:28:46.359887 1353 log.go:181] (0xc0005ecf20) Data frame received for 5\nI0914 12:28:46.359909 1353 log.go:181] (0xc000d620a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 12:28:46.364219 1353 log.go:181] (0xc0005ecf20) Data frame received for 1\nI0914 12:28:46.364237 1353 log.go:181] (0xc000628280) (1) Data frame handling\nI0914 
12:28:46.364251 1353 log.go:181] (0xc000628280) (1) Data frame sent\nI0914 12:28:46.364374 1353 log.go:181] (0xc0005ecf20) (0xc000628280) Stream removed, broadcasting: 1\nI0914 12:28:46.364623 1353 log.go:181] (0xc0005ecf20) Go away received\nI0914 12:28:46.364676 1353 log.go:181] (0xc0005ecf20) (0xc000628280) Stream removed, broadcasting: 1\nI0914 12:28:46.364689 1353 log.go:181] (0xc0005ecf20) (0xc0006297c0) Stream removed, broadcasting: 3\nI0914 12:28:46.364696 1353 log.go:181] (0xc0005ecf20) (0xc000d620a0) Stream removed, broadcasting: 5\n" Sep 14 12:28:46.367: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:28:46.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:28:46.371: INFO: Found 1 stateful pods, waiting for 3 Sep 14 12:28:56.375: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:28:56.375: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:28:56.375: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 14 12:28:56.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:28:56.616: INFO: stderr: "I0914 12:28:56.515127 1371 log.go:181] (0xc000665810) (0xc0005c0a00) Create stream\nI0914 12:28:56.515176 1371 log.go:181] (0xc000665810) (0xc0005c0a00) Stream added, broadcasting: 1\nI0914 12:28:56.520220 1371 log.go:181] (0xc000665810) Reply frame received for 1\nI0914 12:28:56.520270 1371 log.go:181] (0xc000665810) (0xc0005c0000) Create stream\nI0914 12:28:56.520300 1371 log.go:181] 
(0xc000665810) (0xc0005c0000) Stream added, broadcasting: 3\nI0914 12:28:56.521304 1371 log.go:181] (0xc000665810) Reply frame received for 3\nI0914 12:28:56.521334 1371 log.go:181] (0xc000665810) (0xc0005c00a0) Create stream\nI0914 12:28:56.521352 1371 log.go:181] (0xc000665810) (0xc0005c00a0) Stream added, broadcasting: 5\nI0914 12:28:56.522353 1371 log.go:181] (0xc000665810) Reply frame received for 5\nI0914 12:28:56.609646 1371 log.go:181] (0xc000665810) Data frame received for 5\nI0914 12:28:56.609695 1371 log.go:181] (0xc000665810) Data frame received for 3\nI0914 12:28:56.609743 1371 log.go:181] (0xc0005c0000) (3) Data frame handling\nI0914 12:28:56.609764 1371 log.go:181] (0xc0005c0000) (3) Data frame sent\nI0914 12:28:56.609777 1371 log.go:181] (0xc000665810) Data frame received for 3\nI0914 12:28:56.609787 1371 log.go:181] (0xc0005c0000) (3) Data frame handling\nI0914 12:28:56.609805 1371 log.go:181] (0xc0005c00a0) (5) Data frame handling\nI0914 12:28:56.609818 1371 log.go:181] (0xc0005c00a0) (5) Data frame sent\nI0914 12:28:56.609829 1371 log.go:181] (0xc000665810) Data frame received for 5\nI0914 12:28:56.609841 1371 log.go:181] (0xc0005c00a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:28:56.611495 1371 log.go:181] (0xc000665810) Data frame received for 1\nI0914 12:28:56.611534 1371 log.go:181] (0xc0005c0a00) (1) Data frame handling\nI0914 12:28:56.611567 1371 log.go:181] (0xc0005c0a00) (1) Data frame sent\nI0914 12:28:56.611605 1371 log.go:181] (0xc000665810) (0xc0005c0a00) Stream removed, broadcasting: 1\nI0914 12:28:56.611653 1371 log.go:181] (0xc000665810) Go away received\nI0914 12:28:56.612049 1371 log.go:181] (0xc000665810) (0xc0005c0a00) Stream removed, broadcasting: 1\nI0914 12:28:56.612072 1371 log.go:181] (0xc000665810) (0xc0005c0000) Stream removed, broadcasting: 3\nI0914 12:28:56.612085 1371 log.go:181] (0xc000665810) (0xc0005c00a0) Stream removed, broadcasting: 5\n" Sep 14 12:28:56.616: INFO: 
stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:28:56.616: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:28:56.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:28:56.863: INFO: stderr: "I0914 12:28:56.742635 1389 log.go:181] (0xc000db22c0) (0xc000430aa0) Create stream\nI0914 12:28:56.742678 1389 log.go:181] (0xc000db22c0) (0xc000430aa0) Stream added, broadcasting: 1\nI0914 12:28:56.745843 1389 log.go:181] (0xc000db22c0) Reply frame received for 1\nI0914 12:28:56.745877 1389 log.go:181] (0xc000db22c0) (0xc000a16640) Create stream\nI0914 12:28:56.745886 1389 log.go:181] (0xc000db22c0) (0xc000a16640) Stream added, broadcasting: 3\nI0914 12:28:56.746525 1389 log.go:181] (0xc000db22c0) Reply frame received for 3\nI0914 12:28:56.746549 1389 log.go:181] (0xc000db22c0) (0xc000430460) Create stream\nI0914 12:28:56.746562 1389 log.go:181] (0xc000db22c0) (0xc000430460) Stream added, broadcasting: 5\nI0914 12:28:56.747240 1389 log.go:181] (0xc000db22c0) Reply frame received for 5\nI0914 12:28:56.796368 1389 log.go:181] (0xc000db22c0) Data frame received for 5\nI0914 12:28:56.796388 1389 log.go:181] (0xc000430460) (5) Data frame handling\nI0914 12:28:56.796399 1389 log.go:181] (0xc000430460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:28:56.857395 1389 log.go:181] (0xc000db22c0) Data frame received for 3\nI0914 12:28:56.857425 1389 log.go:181] (0xc000a16640) (3) Data frame handling\nI0914 12:28:56.857439 1389 log.go:181] (0xc000a16640) (3) Data frame sent\nI0914 12:28:56.857460 1389 log.go:181] (0xc000db22c0) Data frame received for 3\nI0914 12:28:56.857474 1389 log.go:181] (0xc000a16640) (3) Data frame handling\nI0914 
12:28:56.857679 1389 log.go:181] (0xc000db22c0) Data frame received for 5\nI0914 12:28:56.857703 1389 log.go:181] (0xc000430460) (5) Data frame handling\nI0914 12:28:56.859443 1389 log.go:181] (0xc000db22c0) Data frame received for 1\nI0914 12:28:56.859459 1389 log.go:181] (0xc000430aa0) (1) Data frame handling\nI0914 12:28:56.859467 1389 log.go:181] (0xc000430aa0) (1) Data frame sent\nI0914 12:28:56.859477 1389 log.go:181] (0xc000db22c0) (0xc000430aa0) Stream removed, broadcasting: 1\nI0914 12:28:56.859513 1389 log.go:181] (0xc000db22c0) Go away received\nI0914 12:28:56.859754 1389 log.go:181] (0xc000db22c0) (0xc000430aa0) Stream removed, broadcasting: 1\nI0914 12:28:56.859770 1389 log.go:181] (0xc000db22c0) (0xc000a16640) Stream removed, broadcasting: 3\nI0914 12:28:56.859776 1389 log.go:181] (0xc000db22c0) (0xc000430460) Stream removed, broadcasting: 5\n" Sep 14 12:28:56.863: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:28:56.863: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:28:56.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 12:28:57.116: INFO: stderr: "I0914 12:28:57.002011 1407 log.go:181] (0xc000360fd0) (0xc00036f360) Create stream\nI0914 12:28:57.002068 1407 log.go:181] (0xc000360fd0) (0xc00036f360) Stream added, broadcasting: 1\nI0914 12:28:57.007599 1407 log.go:181] (0xc000360fd0) Reply frame received for 1\nI0914 12:28:57.007648 1407 log.go:181] (0xc000360fd0) (0xc000abe000) Create stream\nI0914 12:28:57.007666 1407 log.go:181] (0xc000360fd0) (0xc000abe000) Stream added, broadcasting: 3\nI0914 12:28:57.008713 1407 log.go:181] (0xc000360fd0) Reply frame received for 3\nI0914 12:28:57.008762 1407 log.go:181] (0xc000360fd0) 
(0xc000abe0a0) Create stream\nI0914 12:28:57.008782 1407 log.go:181] (0xc000360fd0) (0xc000abe0a0) Stream added, broadcasting: 5\nI0914 12:28:57.009705 1407 log.go:181] (0xc000360fd0) Reply frame received for 5\nI0914 12:28:57.075405 1407 log.go:181] (0xc000360fd0) Data frame received for 5\nI0914 12:28:57.075454 1407 log.go:181] (0xc000abe0a0) (5) Data frame handling\nI0914 12:28:57.075491 1407 log.go:181] (0xc000abe0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 12:28:57.109534 1407 log.go:181] (0xc000360fd0) Data frame received for 3\nI0914 12:28:57.109579 1407 log.go:181] (0xc000abe000) (3) Data frame handling\nI0914 12:28:57.109614 1407 log.go:181] (0xc000abe000) (3) Data frame sent\nI0914 12:28:57.109695 1407 log.go:181] (0xc000360fd0) Data frame received for 3\nI0914 12:28:57.109730 1407 log.go:181] (0xc000abe000) (3) Data frame handling\nI0914 12:28:57.109915 1407 log.go:181] (0xc000360fd0) Data frame received for 5\nI0914 12:28:57.109935 1407 log.go:181] (0xc000abe0a0) (5) Data frame handling\nI0914 12:28:57.111901 1407 log.go:181] (0xc000360fd0) Data frame received for 1\nI0914 12:28:57.111921 1407 log.go:181] (0xc00036f360) (1) Data frame handling\nI0914 12:28:57.111936 1407 log.go:181] (0xc00036f360) (1) Data frame sent\nI0914 12:28:57.111951 1407 log.go:181] (0xc000360fd0) (0xc00036f360) Stream removed, broadcasting: 1\nI0914 12:28:57.111999 1407 log.go:181] (0xc000360fd0) Go away received\nI0914 12:28:57.112352 1407 log.go:181] (0xc000360fd0) (0xc00036f360) Stream removed, broadcasting: 1\nI0914 12:28:57.112375 1407 log.go:181] (0xc000360fd0) (0xc000abe000) Stream removed, broadcasting: 3\nI0914 12:28:57.112383 1407 log.go:181] (0xc000360fd0) (0xc000abe0a0) Stream removed, broadcasting: 5\n" Sep 14 12:28:57.116: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 12:28:57.116: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 12:28:57.116: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:28:57.120: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Sep 14 12:29:07.129: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:29:07.129: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:29:07.129: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 14 12:29:07.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999334s Sep 14 12:29:08.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980786744s Sep 14 12:29:09.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975696584s Sep 14 12:29:10.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970450233s Sep 14 12:29:11.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964352405s Sep 14 12:29:12.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959044927s Sep 14 12:29:13.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953431016s Sep 14 12:29:14.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.947731362s Sep 14 12:29:15.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941684862s Sep 14 12:29:16.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.813141ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2823 Sep 14 12:29:17.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:29:17.427: INFO: stderr: "I0914 12:29:17.361615 1425 log.go:181] 
(0xc0005c2000) (0xc000cd6000) Create stream\nI0914 12:29:17.361669 1425 log.go:181] (0xc0005c2000) (0xc000cd6000) Stream added, broadcasting: 1\nI0914 12:29:17.363525 1425 log.go:181] (0xc0005c2000) Reply frame received for 1\nI0914 12:29:17.363568 1425 log.go:181] (0xc0005c2000) (0xc000309400) Create stream\nI0914 12:29:17.363624 1425 log.go:181] (0xc0005c2000) (0xc000309400) Stream added, broadcasting: 3\nI0914 12:29:17.364385 1425 log.go:181] (0xc0005c2000) Reply frame received for 3\nI0914 12:29:17.364409 1425 log.go:181] (0xc0005c2000) (0xc000b1ab40) Create stream\nI0914 12:29:17.364418 1425 log.go:181] (0xc0005c2000) (0xc000b1ab40) Stream added, broadcasting: 5\nI0914 12:29:17.365236 1425 log.go:181] (0xc0005c2000) Reply frame received for 5\nI0914 12:29:17.420508 1425 log.go:181] (0xc0005c2000) Data frame received for 3\nI0914 12:29:17.420531 1425 log.go:181] (0xc000309400) (3) Data frame handling\nI0914 12:29:17.420555 1425 log.go:181] (0xc000309400) (3) Data frame sent\nI0914 12:29:17.420582 1425 log.go:181] (0xc0005c2000) Data frame received for 5\nI0914 12:29:17.420614 1425 log.go:181] (0xc000b1ab40) (5) Data frame handling\nI0914 12:29:17.420641 1425 log.go:181] (0xc000b1ab40) (5) Data frame sent\nI0914 12:29:17.420670 1425 log.go:181] (0xc0005c2000) Data frame received for 5\nI0914 12:29:17.420691 1425 log.go:181] (0xc000b1ab40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 12:29:17.420809 1425 log.go:181] (0xc0005c2000) Data frame received for 3\nI0914 12:29:17.420821 1425 log.go:181] (0xc000309400) (3) Data frame handling\nI0914 12:29:17.422541 1425 log.go:181] (0xc0005c2000) Data frame received for 1\nI0914 12:29:17.422565 1425 log.go:181] (0xc000cd6000) (1) Data frame handling\nI0914 12:29:17.422585 1425 log.go:181] (0xc000cd6000) (1) Data frame sent\nI0914 12:29:17.422613 1425 log.go:181] (0xc0005c2000) (0xc000cd6000) Stream removed, broadcasting: 1\nI0914 12:29:17.422753 1425 log.go:181] (0xc0005c2000) Go away 
received\nI0914 12:29:17.423088 1425 log.go:181] (0xc0005c2000) (0xc000cd6000) Stream removed, broadcasting: 1\nI0914 12:29:17.423112 1425 log.go:181] (0xc0005c2000) (0xc000309400) Stream removed, broadcasting: 3\nI0914 12:29:17.423137 1425 log.go:181] (0xc0005c2000) (0xc000b1ab40) Stream removed, broadcasting: 5\n" Sep 14 12:29:17.427: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:29:17.427: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:29:17.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:29:17.625: INFO: stderr: "I0914 12:29:17.557623 1443 log.go:181] (0xc00063a000) (0xc000bb20a0) Create stream\nI0914 12:29:17.557695 1443 log.go:181] (0xc00063a000) (0xc000bb20a0) Stream added, broadcasting: 1\nI0914 12:29:17.559513 1443 log.go:181] (0xc00063a000) Reply frame received for 1\nI0914 12:29:17.559571 1443 log.go:181] (0xc00063a000) (0xc000c34000) Create stream\nI0914 12:29:17.559583 1443 log.go:181] (0xc00063a000) (0xc000c34000) Stream added, broadcasting: 3\nI0914 12:29:17.560503 1443 log.go:181] (0xc00063a000) Reply frame received for 3\nI0914 12:29:17.560533 1443 log.go:181] (0xc00063a000) (0xc000c340a0) Create stream\nI0914 12:29:17.560543 1443 log.go:181] (0xc00063a000) (0xc000c340a0) Stream added, broadcasting: 5\nI0914 12:29:17.561360 1443 log.go:181] (0xc00063a000) Reply frame received for 5\nI0914 12:29:17.620472 1443 log.go:181] (0xc00063a000) Data frame received for 3\nI0914 12:29:17.620490 1443 log.go:181] (0xc000c34000) (3) Data frame handling\nI0914 12:29:17.620508 1443 log.go:181] (0xc000c34000) (3) Data frame sent\nI0914 12:29:17.620521 1443 log.go:181] (0xc00063a000) Data frame received for 5\nI0914 12:29:17.620527 1443 
log.go:181] (0xc000c340a0) (5) Data frame handling\nI0914 12:29:17.620540 1443 log.go:181] (0xc000c340a0) (5) Data frame sent\nI0914 12:29:17.620552 1443 log.go:181] (0xc00063a000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 12:29:17.620559 1443 log.go:181] (0xc000c340a0) (5) Data frame handling\nI0914 12:29:17.620603 1443 log.go:181] (0xc00063a000) Data frame received for 3\nI0914 12:29:17.620626 1443 log.go:181] (0xc000c34000) (3) Data frame handling\nI0914 12:29:17.622008 1443 log.go:181] (0xc00063a000) Data frame received for 1\nI0914 12:29:17.622020 1443 log.go:181] (0xc000bb20a0) (1) Data frame handling\nI0914 12:29:17.622030 1443 log.go:181] (0xc000bb20a0) (1) Data frame sent\nI0914 12:29:17.622097 1443 log.go:181] (0xc00063a000) (0xc000bb20a0) Stream removed, broadcasting: 1\nI0914 12:29:17.622241 1443 log.go:181] (0xc00063a000) Go away received\nI0914 12:29:17.622367 1443 log.go:181] (0xc00063a000) (0xc000bb20a0) Stream removed, broadcasting: 1\nI0914 12:29:17.622383 1443 log.go:181] (0xc00063a000) (0xc000c34000) Stream removed, broadcasting: 3\nI0914 12:29:17.622389 1443 log.go:181] (0xc00063a000) (0xc000c340a0) Stream removed, broadcasting: 5\n" Sep 14 12:29:17.625: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:29:17.625: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:29:17.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2823 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 12:29:17.839: INFO: stderr: "I0914 12:29:17.773813 1462 log.go:181] (0xc00003a0b0) (0xc0007f2460) Create stream\nI0914 12:29:17.773894 1462 log.go:181] (0xc00003a0b0) (0xc0007f2460) Stream added, broadcasting: 1\nI0914 12:29:17.778639 1462 log.go:181] (0xc00003a0b0) Reply 
frame received for 1\nI0914 12:29:17.778748 1462 log.go:181] (0xc00003a0b0) (0xc000bbfae0) Create stream\nI0914 12:29:17.778782 1462 log.go:181] (0xc00003a0b0) (0xc000bbfae0) Stream added, broadcasting: 3\nI0914 12:29:17.780033 1462 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0914 12:29:17.780077 1462 log.go:181] (0xc00003a0b0) (0xc0007f2a00) Create stream\nI0914 12:29:17.780090 1462 log.go:181] (0xc00003a0b0) (0xc0007f2a00) Stream added, broadcasting: 5\nI0914 12:29:17.781074 1462 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0914 12:29:17.832839 1462 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 12:29:17.832867 1462 log.go:181] (0xc0007f2a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 12:29:17.832881 1462 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 12:29:17.832902 1462 log.go:181] (0xc000bbfae0) (3) Data frame handling\nI0914 12:29:17.832913 1462 log.go:181] (0xc000bbfae0) (3) Data frame sent\nI0914 12:29:17.832920 1462 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 12:29:17.832926 1462 log.go:181] (0xc000bbfae0) (3) Data frame handling\nI0914 12:29:17.832948 1462 log.go:181] (0xc0007f2a00) (5) Data frame sent\nI0914 12:29:17.832955 1462 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 12:29:17.832961 1462 log.go:181] (0xc0007f2a00) (5) Data frame handling\nI0914 12:29:17.834161 1462 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0914 12:29:17.834182 1462 log.go:181] (0xc0007f2460) (1) Data frame handling\nI0914 12:29:17.834194 1462 log.go:181] (0xc0007f2460) (1) Data frame sent\nI0914 12:29:17.834208 1462 log.go:181] (0xc00003a0b0) (0xc0007f2460) Stream removed, broadcasting: 1\nI0914 12:29:17.834219 1462 log.go:181] (0xc00003a0b0) Go away received\nI0914 12:29:17.834501 1462 log.go:181] (0xc00003a0b0) (0xc0007f2460) Stream removed, broadcasting: 1\nI0914 12:29:17.834516 1462 log.go:181] (0xc00003a0b0) (0xc000bbfae0) Stream removed, 
broadcasting: 3\nI0914 12:29:17.834523 1462 log.go:181] (0xc00003a0b0) (0xc0007f2a00) Stream removed, broadcasting: 5\n" Sep 14 12:29:17.839: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 12:29:17.839: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 12:29:17.839: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 14 12:29:37.890: INFO: Deleting all statefulset in ns statefulset-2823 Sep 14 12:29:37.893: INFO: Scaling statefulset ss to 0 Sep 14 12:29:37.904: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:29:37.906: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:29:37.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2823" for this suite. 
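The readiness-toggling trick the StatefulSet test relies on above can be sketched as follows: moving the httpd index file out of the web root makes the pod's HTTP readiness probe fail, which halts OrderedReady scaling at that ordinal; moving it back restores readiness and lets scaling resume. Pod and namespace names are taken from this log; the commands are only printed (the `--server`/`--kubeconfig` flags are omitted), so the sketch runs without a cluster.

```shell
#!/bin/sh
# Compose the "kubectl exec ... mv" commands the test uses to flip a
# stateful pod between ready and unready, without executing them.
toggle_ready() {
  # $1=pod name, $2=off|on ("off" breaks the readiness probe, "on" repairs it)
  if [ "$2" = off ]; then
    src=/usr/local/apache2/htdocs/index.html; dst=/tmp/
  else
    src=/tmp/index.html; dst=/usr/local/apache2/htdocs/
  fi
  printf 'kubectl exec --namespace=statefulset-2823 %s -- /bin/sh -c "mv -v %s %s || true"\n' \
    "$1" "$src" "$dst"
}

toggle_ready ss-0 off   # ss-0 goes unready; scale-up past ordinal 0 halts
toggle_ready ss-0 on    # ss-0 recovers; pods are created in ordinal order again
```

The `|| true` mirrors the test's own invocation: the second `mv` in a row would fail once the file has already been moved, and the exec must still exit 0 so the framework does not treat it as an error.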
• [SLOW TEST:82.334 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":110,"skipped":1515,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:29:37.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:29:37.991: INFO: >>> kubeConfig: 
/root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 14 12:29:40.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-630 create -f -' Sep 14 12:29:44.668: INFO: stderr: "" Sep 14 12:29:44.668: INFO: stdout: "e2e-test-crd-publish-openapi-3130-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 14 12:29:44.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-630 delete e2e-test-crd-publish-openapi-3130-crds test-cr' Sep 14 12:29:44.776: INFO: stderr: "" Sep 14 12:29:44.776: INFO: stdout: "e2e-test-crd-publish-openapi-3130-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 14 12:29:44.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-630 apply -f -' Sep 14 12:29:45.080: INFO: stderr: "" Sep 14 12:29:45.080: INFO: stdout: "e2e-test-crd-publish-openapi-3130-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 14 12:29:45.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-630 delete e2e-test-crd-publish-openapi-3130-crds test-cr' Sep 14 12:29:45.195: INFO: stderr: "" Sep 14 12:29:45.195: INFO: stdout: "e2e-test-crd-publish-openapi-3130-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 14 12:29:45.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3130-crds' Sep 14 12:29:45.488: INFO: stderr: "" Sep 14 12:29:45.488: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3130-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:29:48.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-630" for this suite. • [SLOW TEST:10.523 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":111,"skipped":1519,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:29:48.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:29:48.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5" in namespace "projected-3435" to be "Succeeded or Failed" Sep 14 12:29:48.545: INFO: Pod "downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.934741ms Sep 14 12:29:50.550: INFO: Pod "downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032537641s Sep 14 12:29:52.553: INFO: Pod "downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03629138s STEP: Saw pod success Sep 14 12:29:52.553: INFO: Pod "downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5" satisfied condition "Succeeded or Failed" Sep 14 12:29:52.557: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5 container client-container: STEP: delete the pod Sep 14 12:29:52.595: INFO: Waiting for pod downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5 to disappear Sep 14 12:29:52.623: INFO: Pod downwardapi-volume-07f28b22-edd5-4568-b785-3f21ee4fabf5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:29:52.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3435" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1522,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:29:52.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:29:52.722: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:29:56.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1638" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1531,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:29:56.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:30:00.999: INFO: Waiting up to 5m0s for pod "client-envvars-6798829e-3608-4632-ba46-c7fb7854586d" in namespace "pods-1365" to be "Succeeded or Failed" Sep 14 12:30:01.008: INFO: Pod "client-envvars-6798829e-3608-4632-ba46-c7fb7854586d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.782792ms Sep 14 12:30:03.013: INFO: Pod "client-envvars-6798829e-3608-4632-ba46-c7fb7854586d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014185173s Sep 14 12:30:05.018: INFO: Pod "client-envvars-6798829e-3608-4632-ba46-c7fb7854586d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018890297s STEP: Saw pod success Sep 14 12:30:05.018: INFO: Pod "client-envvars-6798829e-3608-4632-ba46-c7fb7854586d" satisfied condition "Succeeded or Failed" Sep 14 12:30:05.021: INFO: Trying to get logs from node latest-worker2 pod client-envvars-6798829e-3608-4632-ba46-c7fb7854586d container env3cont: STEP: delete the pod Sep 14 12:30:05.077: INFO: Waiting for pod client-envvars-6798829e-3608-4632-ba46-c7fb7854586d to disappear Sep 14 12:30:05.084: INFO: Pod client-envvars-6798829e-3608-4632-ba46-c7fb7854586d no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:05.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1365" for this suite. 
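[Annotation] The environment variables this spec checks are the Docker-link-style variables the kubelet injects for every service that exists when the pod starts. For a hypothetical service `foo` with cluster IP 10.0.0.11 and port 8765, a container would see variables of roughly this shape (values illustrative, not from this run):

```
FOO_SERVICE_HOST=10.0.0.11
FOO_SERVICE_PORT=8765
FOO_PORT=tcp://10.0.0.11:8765
FOO_PORT_8765_TCP=tcp://10.0.0.11:8765
FOO_PORT_8765_TCP_PROTO=tcp
FOO_PORT_8765_TCP_PORT=8765
FOO_PORT_8765_TCP_ADDR=10.0.0.11
```

Because injection happens at pod start, the test above creates the service first and only then starts the client pod whose logs it inspects.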
• [SLOW TEST:8.223 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:05.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:05.223: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1899" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":115,"skipped":1592,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:05.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5024.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5024.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 12:30:11.372: INFO: DNS probes using dns-5024/dns-test-5d36f326-0d36-430e-8f36-0bf27898887a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:11.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5024" for this suite. 
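[Annotation] The wheezy/jessie command strings quoted above run inside probe pods that the test submits; each probe is roughly a pod of the following shape (pod name, image, and result path are illustrative assumptions, not from this run):

```yaml
# Sketch only: a DNS probe pod running the same dig check the test loops on.
apiVersion: v1
kind: Pod
metadata:
  name: dns-probe            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: querier
    image: dnsutils:latest   # placeholder: any image that ships dig
    command: ["sh", "-c"]
    args:
    - |
      # Resolve the API service A record over UDP and record OK only
      # when an answer section comes back.
      check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
        && test -n "$check" && echo OK > /results/udp@kubernetes.default
```

The real test repeats the check over both UDP (`+notcp`) and TCP (`+tcp`), and also probes the pod's own A record built from `hostname -i`, as shown in the quoted commands.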
• [SLOW TEST:6.251 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":116,"skipped":1594,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:11.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 14 12:30:11.911: INFO: created test-event-1 Sep 14 12:30:11.917: INFO: created test-event-2 Sep 14 12:30:11.923: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Sep 14 12:30:11.929: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 14 12:30:11.949: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:11.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4654" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":117,"skipped":1616,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:11.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-8cde3345-4917-46c8-9d76-afb7f920923e STEP: Creating a pod to test consume secrets Sep 14 12:30:12.096: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81" in namespace "projected-1947" to be "Succeeded or Failed" Sep 14 12:30:12.102: INFO: Pod "pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.447473ms Sep 14 12:30:14.107: INFO: Pod "pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011616005s Sep 14 12:30:16.151: INFO: Pod "pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054939021s STEP: Saw pod success Sep 14 12:30:16.151: INFO: Pod "pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81" satisfied condition "Succeeded or Failed" Sep 14 12:30:16.154: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81 container projected-secret-volume-test: STEP: delete the pod Sep 14 12:30:16.278: INFO: Waiting for pod pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81 to disappear Sep 14 12:30:16.282: INFO: Pod pod-projected-secrets-211120b0-78d0-4278-8cb7-6b8c0dd21d81 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:16.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1947" for this suite. 
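[Annotation] "with mappings and Item Mode set" refers to a projected secret volume whose `items` remap a key to a new path and set a per-item file mode. A minimal sketch of the kind of pod this spec creates (names, key, path, and image are illustrative, not from this run):

```yaml
# Sketch only: projected secret volume with a key mapping and item mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets       # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400              # the "Item Mode set" part
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
```

The test then reads the container's logs (as in "Trying to get logs ... container projected-secret-volume-test" above) to confirm both the remapped path and the mode.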
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1637,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:16.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 14 12:30:16.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f -' Sep 14 12:30:16.695: INFO: stderr: "" Sep 14 12:30:16.695: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 14 12:30:16.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config diff -f -' Sep 14 12:30:17.257: INFO: rc: 1 Sep 14 12:30:17.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 
--kubeconfig=/root/.kube/config delete -f -' Sep 14 12:30:17.377: INFO: stderr: "" Sep 14 12:30:17.377: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:30:17.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3491" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":119,"skipped":1649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:30:17.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-c437490d-cf4a-4d41-be7b-c984136a6275 STEP: Creating secret with name s-test-opt-upd-ca43c0fa-6c5e-4d37-8b90-670c2c27ac96 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c437490d-cf4a-4d41-be7b-c984136a6275 STEP: Updating secret s-test-opt-upd-ca43c0fa-6c5e-4d37-8b90-670c2c27ac96 STEP: 
Creating secret with name s-test-opt-create-b51a3187-e20d-4e2b-918f-fae9af46bff8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:31:52.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7054" for this suite. • [SLOW TEST:94.679 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:31:52.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks 
[sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:33:52.174: INFO: Deleting pod "var-expansion-cd946504-b5df-4eb1-ace5-bd88a21e6f1f" in namespace "var-expansion-2544" Sep 14 12:33:52.181: INFO: Wait up to 5m0s for pod "var-expansion-cd946504-b5df-4eb1-ace5-bd88a21e6f1f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:33:56.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2544" for this suite. • [SLOW TEST:124.151 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":121,"skipped":1738,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:33:56.214: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8149 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8149 I0914 12:33:56.372716 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8149, replica count: 2 I0914 12:33:59.423145 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:34:02.423475 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 12:34:02.423: INFO: Creating new exec pod Sep 14 12:34:07.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-8149 execpod64mjq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 14 12:34:07.692: INFO: stderr: "I0914 12:34:07.619674 1626 log.go:181] (0xc0001b2370) (0xc000572000) Create stream\nI0914 12:34:07.619752 1626 log.go:181] (0xc0001b2370) (0xc000572000) Stream added, broadcasting: 1\nI0914 12:34:07.622279 1626 log.go:181] (0xc0001b2370) Reply frame received for 1\nI0914 12:34:07.622314 1626 log.go:181] (0xc0001b2370) (0xc000c8a280) Create stream\nI0914 
12:34:07.622326 1626 log.go:181] (0xc0001b2370) (0xc000c8a280) Stream added, broadcasting: 3\nI0914 12:34:07.624550 1626 log.go:181] (0xc0001b2370) Reply frame received for 3\nI0914 12:34:07.624595 1626 log.go:181] (0xc0001b2370) (0xc000ab0000) Create stream\nI0914 12:34:07.624614 1626 log.go:181] (0xc0001b2370) (0xc000ab0000) Stream added, broadcasting: 5\nI0914 12:34:07.625522 1626 log.go:181] (0xc0001b2370) Reply frame received for 5\nI0914 12:34:07.685769 1626 log.go:181] (0xc0001b2370) Data frame received for 3\nI0914 12:34:07.685799 1626 log.go:181] (0xc000c8a280) (3) Data frame handling\nI0914 12:34:07.685822 1626 log.go:181] (0xc0001b2370) Data frame received for 5\nI0914 12:34:07.685831 1626 log.go:181] (0xc000ab0000) (5) Data frame handling\nI0914 12:34:07.685843 1626 log.go:181] (0xc000ab0000) (5) Data frame sent\nI0914 12:34:07.685854 1626 log.go:181] (0xc0001b2370) Data frame received for 5\nI0914 12:34:07.685864 1626 log.go:181] (0xc000ab0000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0914 12:34:07.687808 1626 log.go:181] (0xc0001b2370) Data frame received for 1\nI0914 12:34:07.687965 1626 log.go:181] (0xc000572000) (1) Data frame handling\nI0914 12:34:07.688040 1626 log.go:181] (0xc000572000) (1) Data frame sent\nI0914 12:34:07.688085 1626 log.go:181] (0xc0001b2370) (0xc000572000) Stream removed, broadcasting: 1\nI0914 12:34:07.688278 1626 log.go:181] (0xc0001b2370) Go away received\nI0914 12:34:07.688556 1626 log.go:181] (0xc0001b2370) (0xc000572000) Stream removed, broadcasting: 1\nI0914 12:34:07.688579 1626 log.go:181] (0xc0001b2370) (0xc000c8a280) Stream removed, broadcasting: 3\nI0914 12:34:07.688590 1626 log.go:181] (0xc0001b2370) (0xc000ab0000) Stream removed, broadcasting: 5\n" Sep 14 12:34:07.692: INFO: stdout: "" Sep 14 12:34:07.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec 
--namespace=services-8149 execpod64mjq -- /bin/sh -x -c nc -zv -t -w 2 10.105.190.125 80' Sep 14 12:34:07.908: INFO: stderr: "I0914 12:34:07.830437 1644 log.go:181] (0xc000bcafd0) (0xc0004ecb40) Create stream\nI0914 12:34:07.830483 1644 log.go:181] (0xc000bcafd0) (0xc0004ecb40) Stream added, broadcasting: 1\nI0914 12:34:07.836421 1644 log.go:181] (0xc000bcafd0) Reply frame received for 1\nI0914 12:34:07.836487 1644 log.go:181] (0xc000bcafd0) (0xc00093c280) Create stream\nI0914 12:34:07.836512 1644 log.go:181] (0xc000bcafd0) (0xc00093c280) Stream added, broadcasting: 3\nI0914 12:34:07.837408 1644 log.go:181] (0xc000bcafd0) Reply frame received for 3\nI0914 12:34:07.837443 1644 log.go:181] (0xc000bcafd0) (0xc00093c500) Create stream\nI0914 12:34:07.837459 1644 log.go:181] (0xc000bcafd0) (0xc00093c500) Stream added, broadcasting: 5\nI0914 12:34:07.838230 1644 log.go:181] (0xc000bcafd0) Reply frame received for 5\nI0914 12:34:07.903156 1644 log.go:181] (0xc000bcafd0) Data frame received for 5\nI0914 12:34:07.903253 1644 log.go:181] (0xc00093c500) (5) Data frame handling\nI0914 12:34:07.903288 1644 log.go:181] (0xc00093c500) (5) Data frame sent\nI0914 12:34:07.903304 1644 log.go:181] (0xc000bcafd0) Data frame received for 5\nI0914 12:34:07.903315 1644 log.go:181] (0xc00093c500) (5) Data frame handling\nI0914 12:34:07.903330 1644 log.go:181] (0xc000bcafd0) Data frame received for 3\nI0914 12:34:07.903341 1644 log.go:181] (0xc00093c280) (3) Data frame handling\n+ nc -zv -t -w 2 10.105.190.125 80\nConnection to 10.105.190.125 80 port [tcp/http] succeeded!\nI0914 12:34:07.904717 1644 log.go:181] (0xc000bcafd0) Data frame received for 1\nI0914 12:34:07.904748 1644 log.go:181] (0xc0004ecb40) (1) Data frame handling\nI0914 12:34:07.904770 1644 log.go:181] (0xc0004ecb40) (1) Data frame sent\nI0914 12:34:07.904792 1644 log.go:181] (0xc000bcafd0) (0xc0004ecb40) Stream removed, broadcasting: 1\nI0914 12:34:07.904849 1644 log.go:181] (0xc000bcafd0) Go away received\nI0914 
12:34:07.905129 1644 log.go:181] (0xc000bcafd0) (0xc0004ecb40) Stream removed, broadcasting: 1\nI0914 12:34:07.905150 1644 log.go:181] (0xc000bcafd0) (0xc00093c280) Stream removed, broadcasting: 3\nI0914 12:34:07.905166 1644 log.go:181] (0xc000bcafd0) (0xc00093c500) Stream removed, broadcasting: 5\n" Sep 14 12:34:07.908: INFO: stdout: "" Sep 14 12:34:07.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-8149 execpod64mjq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30191' Sep 14 12:34:08.118: INFO: stderr: "I0914 12:34:08.047333 1662 log.go:181] (0xc000142370) (0xc0001a88c0) Create stream\nI0914 12:34:08.047390 1662 log.go:181] (0xc000142370) (0xc0001a88c0) Stream added, broadcasting: 1\nI0914 12:34:08.049822 1662 log.go:181] (0xc000142370) Reply frame received for 1\nI0914 12:34:08.049884 1662 log.go:181] (0xc000142370) (0xc0003dc500) Create stream\nI0914 12:34:08.049901 1662 log.go:181] (0xc000142370) (0xc0003dc500) Stream added, broadcasting: 3\nI0914 12:34:08.050937 1662 log.go:181] (0xc000142370) Reply frame received for 3\nI0914 12:34:08.050981 1662 log.go:181] (0xc000142370) (0xc000b141e0) Create stream\nI0914 12:34:08.051004 1662 log.go:181] (0xc000142370) (0xc000b141e0) Stream added, broadcasting: 5\nI0914 12:34:08.052442 1662 log.go:181] (0xc000142370) Reply frame received for 5\nI0914 12:34:08.111196 1662 log.go:181] (0xc000142370) Data frame received for 5\nI0914 12:34:08.111234 1662 log.go:181] (0xc000b141e0) (5) Data frame handling\nI0914 12:34:08.111253 1662 log.go:181] (0xc000b141e0) (5) Data frame sent\nI0914 12:34:08.111265 1662 log.go:181] (0xc000142370) Data frame received for 5\nI0914 12:34:08.111273 1662 log.go:181] (0xc000b141e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30191\nConnection to 172.18.0.15 30191 port [tcp/30191] succeeded!\nI0914 12:34:08.111291 1662 log.go:181] (0xc000b141e0) (5) Data frame sent\nI0914 12:34:08.111772 1662 
log.go:181] (0xc000142370) Data frame received for 3\nI0914 12:34:08.111797 1662 log.go:181] (0xc0003dc500) (3) Data frame handling\nI0914 12:34:08.111833 1662 log.go:181] (0xc000142370) Data frame received for 5\nI0914 12:34:08.111862 1662 log.go:181] (0xc000b141e0) (5) Data frame handling\nI0914 12:34:08.114948 1662 log.go:181] (0xc000142370) Data frame received for 1\nI0914 12:34:08.114972 1662 log.go:181] (0xc0001a88c0) (1) Data frame handling\nI0914 12:34:08.114985 1662 log.go:181] (0xc0001a88c0) (1) Data frame sent\nI0914 12:34:08.115000 1662 log.go:181] (0xc000142370) (0xc0001a88c0) Stream removed, broadcasting: 1\nI0914 12:34:08.115015 1662 log.go:181] (0xc000142370) Go away received\nI0914 12:34:08.115434 1662 log.go:181] (0xc000142370) (0xc0001a88c0) Stream removed, broadcasting: 1\nI0914 12:34:08.115462 1662 log.go:181] (0xc000142370) (0xc0003dc500) Stream removed, broadcasting: 3\nI0914 12:34:08.115472 1662 log.go:181] (0xc000142370) (0xc000b141e0) Stream removed, broadcasting: 5\n" Sep 14 12:34:08.118: INFO: stdout: "" Sep 14 12:34:08.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-8149 execpod64mjq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30191' Sep 14 12:34:08.308: INFO: stderr: "I0914 12:34:08.235826 1679 log.go:181] (0xc000dcaf20) (0xc00039db80) Create stream\nI0914 12:34:08.235874 1679 log.go:181] (0xc000dcaf20) (0xc00039db80) Stream added, broadcasting: 1\nI0914 12:34:08.242001 1679 log.go:181] (0xc000dcaf20) Reply frame received for 1\nI0914 12:34:08.242058 1679 log.go:181] (0xc000dcaf20) (0xc00039c640) Create stream\nI0914 12:34:08.242086 1679 log.go:181] (0xc000dcaf20) (0xc00039c640) Stream added, broadcasting: 3\nI0914 12:34:08.243045 1679 log.go:181] (0xc000dcaf20) Reply frame received for 3\nI0914 12:34:08.243070 1679 log.go:181] (0xc000dcaf20) (0xc000a1e0a0) Create stream\nI0914 12:34:08.243079 1679 log.go:181] (0xc000dcaf20) (0xc000a1e0a0) Stream 
added, broadcasting: 5\nI0914 12:34:08.243925 1679 log.go:181] (0xc000dcaf20) Reply frame received for 5\nI0914 12:34:08.301742 1679 log.go:181] (0xc000dcaf20) Data frame received for 3\nI0914 12:34:08.301789 1679 log.go:181] (0xc00039c640) (3) Data frame handling\nI0914 12:34:08.301818 1679 log.go:181] (0xc000dcaf20) Data frame received for 5\nI0914 12:34:08.301838 1679 log.go:181] (0xc000a1e0a0) (5) Data frame handling\nI0914 12:34:08.301861 1679 log.go:181] (0xc000a1e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 30191\nConnection to 172.18.0.16 30191 port [tcp/30191] succeeded!\nI0914 12:34:08.301972 1679 log.go:181] (0xc000dcaf20) Data frame received for 5\nI0914 12:34:08.302007 1679 log.go:181] (0xc000a1e0a0) (5) Data frame handling\nI0914 12:34:08.303864 1679 log.go:181] (0xc000dcaf20) Data frame received for 1\nI0914 12:34:08.303889 1679 log.go:181] (0xc00039db80) (1) Data frame handling\nI0914 12:34:08.303909 1679 log.go:181] (0xc00039db80) (1) Data frame sent\nI0914 12:34:08.303929 1679 log.go:181] (0xc000dcaf20) (0xc00039db80) Stream removed, broadcasting: 1\nI0914 12:34:08.304324 1679 log.go:181] (0xc000dcaf20) Go away received\nI0914 12:34:08.304413 1679 log.go:181] (0xc000dcaf20) (0xc00039db80) Stream removed, broadcasting: 1\nI0914 12:34:08.304429 1679 log.go:181] (0xc000dcaf20) (0xc00039c640) Stream removed, broadcasting: 3\nI0914 12:34:08.304436 1679 log.go:181] (0xc000dcaf20) (0xc000a1e0a0) Stream removed, broadcasting: 5\n" Sep 14 12:34:08.308: INFO: stdout: "" Sep 14 12:34:08.308: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:34:08.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8149" for this suite. 
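The service-type transition this test exercises can be sketched as two manifests. This is an illustrative reconstruction, not the test's actual objects: the externalName target and the selector labels are assumptions; only the service name, the namespace, and port 80 appear in the log above.

```yaml
# Before: an ExternalName service is pure DNS aliasing -- no selector,
# no cluster IP, no endpoints.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-8149
spec:
  type: ExternalName
  externalName: example.invalid        # assumed target; not in the log
---
# After: the same service switched to type=NodePort, now backed by the
# replication controller's two pods. kube-proxy exposes it on every
# node; the log probes 172.18.0.15:30191 and 172.18.0.16:30191.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-8149
spec:
  type: NodePort
  selector:
    name: externalname-service         # assumed label; mirrors the RC name
  ports:
  - port: 80
    targetPort: 80
```

The `nc -zv -t -w 2 <host> <port>` probes in the log then verify reachability three ways: via the service DNS name, via the cluster IP, and via each node's NodePort.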
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.182 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":122,"skipped":1741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:34:08.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that 
daemon pods launch on every node of the cluster. Sep 14 12:34:08.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:08.586: INFO: Number of nodes with available pods: 0 Sep 14 12:34:08.586: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:09.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:09.596: INFO: Number of nodes with available pods: 0 Sep 14 12:34:09.596: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:10.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:10.636: INFO: Number of nodes with available pods: 0 Sep 14 12:34:10.636: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:11.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:11.678: INFO: Number of nodes with available pods: 0 Sep 14 12:34:11.678: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:12.592: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:12.597: INFO: Number of nodes with available pods: 1 Sep 14 12:34:12.597: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:13.590: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:13.593: INFO: Number of 
nodes with available pods: 2 Sep 14 12:34:13.593: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Sep 14 12:34:13.622: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:13.625: INFO: Number of nodes with available pods: 1 Sep 14 12:34:13.625: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:14.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:14.635: INFO: Number of nodes with available pods: 1 Sep 14 12:34:14.635: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:15.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:15.633: INFO: Number of nodes with available pods: 1 Sep 14 12:34:15.633: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:16.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:16.634: INFO: Number of nodes with available pods: 1 Sep 14 12:34:16.634: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:17.648: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:17.651: INFO: Number of nodes with available pods: 1 Sep 14 12:34:17.651: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:18.855: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:19.375: INFO: Number of nodes with available pods: 1 Sep 14 12:34:19.375: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:19.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:19.636: INFO: Number of nodes with available pods: 1 Sep 14 12:34:19.636: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:20.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:20.634: INFO: Number of nodes with available pods: 1 Sep 14 12:34:20.635: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:21.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:21.662: INFO: Number of nodes with available pods: 1 Sep 14 12:34:21.662: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:22.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:22.635: INFO: Number of nodes with available pods: 1 Sep 14 12:34:22.635: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:23.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:23.635: INFO: Number of nodes with available pods: 1 Sep 14 12:34:23.635: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:24.632: INFO: DaemonSet 
pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:24.636: INFO: Number of nodes with available pods: 1 Sep 14 12:34:24.636: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:25.644: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:25.648: INFO: Number of nodes with available pods: 1 Sep 14 12:34:25.648: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:26.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:26.636: INFO: Number of nodes with available pods: 1 Sep 14 12:34:26.636: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:27.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:27.733: INFO: Number of nodes with available pods: 1 Sep 14 12:34:27.733: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:28.631: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:28.635: INFO: Number of nodes with available pods: 1 Sep 14 12:34:28.635: INFO: Node latest-worker is running more than one daemon pod Sep 14 12:34:29.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 14 12:34:29.633: INFO: Number of nodes with available pods: 2 Sep 14 12:34:29.633: INFO: Number of running nodes: 2, 
number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-582, will wait for the garbage collector to delete the pods Sep 14 12:34:29.695: INFO: Deleting DaemonSet.extensions daemon-set took: 6.472134ms Sep 14 12:34:30.095: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.128588ms Sep 14 12:34:35.599: INFO: Number of nodes with available pods: 0 Sep 14 12:34:35.599: INFO: Number of running nodes: 0, number of available pods: 0 Sep 14 12:34:35.602: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-582/daemonsets","resourceVersion":"267115"},"items":null} Sep 14 12:34:35.604: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-582/pods","resourceVersion":"267115"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:34:35.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-582" for this suite. 
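The DaemonSet this test creates is minimal; a sketch under assumed labels and image (only the name "daemon-set" and the namespace are from the log). The repeated "can't tolerate ... NoSchedule" lines are expected, not an error: the pod template carries no toleration for the master taint, so latest-control-plane is skipped and only the two worker nodes count toward availability.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-582
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set       # assumed label key/value
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # illustrative image
      # No toleration for node-role.kubernetes.io/master:NoSchedule is
      # declared, so the control-plane node is intentionally left out
      # and "Number of running nodes" settles at 2.
```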
• [SLOW TEST:27.224 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":123,"skipped":1783,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:34:35.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:34:36.336: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:34:38.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683676, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683676, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683676, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683676, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:34:41.429: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Sep 14 12:34:45.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config attach --namespace=webhook-7931 to-be-attached-pod -i -c=container1' Sep 14 12:34:45.646: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:34:45.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7931" for this suite. 
STEP: Destroying namespace "webhook-7931-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":124,"skipped":1785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:34:45.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 
0777 on tmpfs Sep 14 12:34:45.823: INFO: Waiting up to 5m0s for pod "pod-db28cddf-458b-4513-82a7-2528fce29375" in namespace "emptydir-4957" to be "Succeeded or Failed" Sep 14 12:34:45.827: INFO: Pod "pod-db28cddf-458b-4513-82a7-2528fce29375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339456ms Sep 14 12:34:47.833: INFO: Pod "pod-db28cddf-458b-4513-82a7-2528fce29375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009572609s Sep 14 12:34:49.838: INFO: Pod "pod-db28cddf-458b-4513-82a7-2528fce29375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014863434s STEP: Saw pod success Sep 14 12:34:49.838: INFO: Pod "pod-db28cddf-458b-4513-82a7-2528fce29375" satisfied condition "Succeeded or Failed" Sep 14 12:34:49.841: INFO: Trying to get logs from node latest-worker2 pod pod-db28cddf-458b-4513-82a7-2528fce29375 container test-container: STEP: delete the pod Sep 14 12:34:49.869: INFO: Waiting for pod pod-db28cddf-458b-4513-82a7-2528fce29375 to disappear Sep 14 12:34:49.874: INFO: Pod pod-db28cddf-458b-4513-82a7-2528fce29375 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:34:49.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4957" for this suite. 
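The (non-root,0777,tmpfs) case boils down to a pod shaped roughly like the sketch below. Names, image, and command are assumptions; the real test uses its own mount-test image to create and check a file with 0777 permissions, and only the "Succeeded or Failed" wait pattern is visible in the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs             # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # the "non-root" part
  containers:
  - name: test-container
    image: busybox                     # illustrative
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed emptyDir
```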
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":1838,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:34:49.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:34:49.969: INFO: Waiting up to 5m0s for pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9" in namespace "downward-api-2118" to be "Succeeded or Failed" Sep 14 12:34:49.977: INFO: Pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.959202ms Sep 14 12:34:52.075: INFO: Pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.105657837s Sep 14 12:34:54.080: INFO: Pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9": Phase="Running", Reason="", readiness=true. Elapsed: 4.110468099s Sep 14 12:34:56.084: INFO: Pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114956063s STEP: Saw pod success Sep 14 12:34:56.084: INFO: Pod "downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9" satisfied condition "Succeeded or Failed" Sep 14 12:34:56.087: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9 container client-container: STEP: delete the pod Sep 14 12:34:56.122: INFO: Waiting for pod downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9 to disappear Sep 14 12:34:56.129: INFO: Pod downwardapi-volume-384b97f7-d3af-4131-a41b-d05f47336cb9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:34:56.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2118" for this suite. 
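The downward API volume test reads the container's memory request back from a projected file. A minimal equivalent, with assumed names and request size (the log only shows the pod/container names and the pass/fail wait):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                   # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

With the default divisor of "1", the projected file holds the request in bytes (33554432 for 32Mi); a divisor such as "1Mi" can be set on the resourceFieldRef to scale it.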
• [SLOW TEST:6.258 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":126,"skipped":1860,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:34:56.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Sep 14 12:35:02.248: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9940 PodName:pod-sharedvolume-67d4e599-145d-44b6-90c2-710eb610015f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Sep 14 12:35:02.248: INFO: >>> kubeConfig: /root/.kube/config I0914 12:35:02.312280 7 log.go:181] (0xc000455340) (0xc000ffb400) Create stream I0914 12:35:02.312322 7 log.go:181] (0xc000455340) (0xc000ffb400) Stream added, broadcasting: 1 I0914 12:35:02.315237 7 log.go:181] (0xc000455340) Reply frame received for 1 I0914 12:35:02.315268 7 log.go:181] (0xc000455340) (0xc0001b3540) Create stream I0914 12:35:02.315279 7 log.go:181] (0xc000455340) (0xc0001b3540) Stream added, broadcasting: 3 I0914 12:35:02.316296 7 log.go:181] (0xc000455340) Reply frame received for 3 I0914 12:35:02.316338 7 log.go:181] (0xc000455340) (0xc00176c320) Create stream I0914 12:35:02.316346 7 log.go:181] (0xc000455340) (0xc00176c320) Stream added, broadcasting: 5 I0914 12:35:02.317172 7 log.go:181] (0xc000455340) Reply frame received for 5 I0914 12:35:02.400560 7 log.go:181] (0xc000455340) Data frame received for 3 I0914 12:35:02.400631 7 log.go:181] (0xc0001b3540) (3) Data frame handling I0914 12:35:02.400672 7 log.go:181] (0xc0001b3540) (3) Data frame sent I0914 12:35:02.400725 7 log.go:181] (0xc000455340) Data frame received for 5 I0914 12:35:02.400789 7 log.go:181] (0xc00176c320) (5) Data frame handling I0914 12:35:02.400832 7 log.go:181] (0xc000455340) Data frame received for 3 I0914 12:35:02.400858 7 log.go:181] (0xc0001b3540) (3) Data frame handling I0914 12:35:02.402437 7 log.go:181] (0xc000455340) Data frame received for 1 I0914 12:35:02.402474 7 log.go:181] (0xc000ffb400) (1) Data frame handling I0914 12:35:02.402496 7 log.go:181] (0xc000ffb400) (1) Data frame sent I0914 12:35:02.402508 7 log.go:181] (0xc000455340) (0xc000ffb400) Stream removed, broadcasting: 1 I0914 12:35:02.402608 7 log.go:181] (0xc000455340) (0xc000ffb400) Stream removed, broadcasting: 1 I0914 12:35:02.402637 7 log.go:181] (0xc000455340) Go away received I0914 12:35:02.402668 7 log.go:181] (0xc000455340) (0xc0001b3540) Stream removed, broadcasting: 3 I0914 12:35:02.402683 7 log.go:181] 
(0xc000455340) (0xc00176c320) Stream removed, broadcasting: 5 Sep 14 12:35:02.402: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:02.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9940" for this suite. • [SLOW TEST:6.275 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":127,"skipped":1870,"failed":0} SSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:02.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] 
should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:02.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5205" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":128,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:02.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:35:02.707: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 14 12:35:04.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9099 create -f -' Sep 14 12:35:08.170: INFO: stderr: "" Sep 14 12:35:08.170: INFO: stdout: "e2e-test-crd-publish-openapi-6437-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 14 12:35:08.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-9099 delete e2e-test-crd-publish-openapi-6437-crds test-cr' Sep 14 12:35:08.285: INFO: stderr: "" Sep 14 12:35:08.285: INFO: stdout: "e2e-test-crd-publish-openapi-6437-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 14 12:35:08.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9099 apply -f -' Sep 14 12:35:08.596: INFO: stderr: "" Sep 14 12:35:08.596: INFO: stdout: "e2e-test-crd-publish-openapi-6437-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 14 12:35:08.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9099 delete e2e-test-crd-publish-openapi-6437-crds test-cr' Sep 14 12:35:08.744: INFO: stderr: "" Sep 14 12:35:08.744: INFO: stdout: "e2e-test-crd-publish-openapi-6437-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 14 12:35:08.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6437-crds' Sep 14 12:35:10.068: INFO: stderr: "" Sep 14 12:35:10.068: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6437-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:13.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9099" for this suite. • [SLOW TEST:10.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":129,"skipped":1967,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:13.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:35:13.583: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40" in namespace "projected-1932" to be "Succeeded or Failed" Sep 14 12:35:13.698: INFO: Pod "downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40": Phase="Pending", Reason="", readiness=false. Elapsed: 115.357612ms Sep 14 12:35:15.704: INFO: Pod "downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121212804s Sep 14 12:35:17.709: INFO: Pod "downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.126406491s STEP: Saw pod success Sep 14 12:35:17.709: INFO: Pod "downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40" satisfied condition "Succeeded or Failed" Sep 14 12:35:17.712: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40 container client-container: STEP: delete the pod Sep 14 12:35:17.855: INFO: Waiting for pod downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40 to disappear Sep 14 12:35:17.864: INFO: Pod downwardapi-volume-d50235a3-c913-4846-a54f-ed5601c89b40 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:17.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1932" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":1974,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:17.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0d10e0ee-0de9-42fb-912c-d7c5a6071337 STEP: Creating a pod to test consume configMaps Sep 14 12:35:17.934: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde" in namespace "projected-5643" to be "Succeeded or Failed" Sep 14 12:35:17.950: INFO: Pod "pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde": Phase="Pending", Reason="", readiness=false. Elapsed: 16.11886ms Sep 14 12:35:19.956: INFO: Pod "pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021354681s Sep 14 12:35:21.961: INFO: Pod "pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026661124s STEP: Saw pod success Sep 14 12:35:21.961: INFO: Pod "pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde" satisfied condition "Succeeded or Failed" Sep 14 12:35:21.964: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde container projected-configmap-volume-test: STEP: delete the pod Sep 14 12:35:22.023: INFO: Waiting for pod pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde to disappear Sep 14 12:35:22.078: INFO: Pod pod-projected-configmaps-e2fe683d-ba32-4ee5-9581-bfc8a4552dde no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:22.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5643" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":1985,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:22.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-137.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-137.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-137.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-137.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 12:35:30.273: INFO: DNS probes using dns-137/dns-test-0a77b429-6bfb-404b-a07b-115580e632c3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:30.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-137" for this suite. 
• [SLOW TEST:8.694 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":132,"skipped":1992,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:30.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:35:30.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7" in namespace "projected-8687" to be "Succeeded or Failed" Sep 14 
12:35:30.901: INFO: Pod "downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.951257ms Sep 14 12:35:33.022: INFO: Pod "downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133182656s Sep 14 12:35:35.026: INFO: Pod "downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136899872s STEP: Saw pod success Sep 14 12:35:35.026: INFO: Pod "downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7" satisfied condition "Succeeded or Failed" Sep 14 12:35:35.029: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7 container client-container: STEP: delete the pod Sep 14 12:35:35.088: INFO: Waiting for pod downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7 to disappear Sep 14 12:35:35.098: INFO: Pod downwardapi-volume-74fd3803-0e05-4c43-9dfd-97646ad64af7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:35.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8687" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:35.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 14 12:35:35.212: INFO: Waiting up to 5m0s for pod "pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a" in namespace "emptydir-8763" to be "Succeeded or Failed" Sep 14 12:35:35.218: INFO: Pod "pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.775934ms Sep 14 12:35:37.227: INFO: Pod "pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014887905s Sep 14 12:35:39.245: INFO: Pod "pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033112528s STEP: Saw pod success Sep 14 12:35:39.245: INFO: Pod "pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a" satisfied condition "Succeeded or Failed" Sep 14 12:35:39.248: INFO: Trying to get logs from node latest-worker2 pod pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a container test-container: STEP: delete the pod Sep 14 12:35:39.264: INFO: Waiting for pod pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a to disappear Sep 14 12:35:39.268: INFO: Pod pod-7e9e12a4-3ad9-40c8-84a2-9e542a4b414a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:39.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8763" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":2025,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:39.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files 
[LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:35:39.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f" in namespace "downward-api-8505" to be "Succeeded or Failed" Sep 14 12:35:39.401: INFO: Pod "downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821728ms Sep 14 12:35:41.519: INFO: Pod "downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122109374s Sep 14 12:35:43.523: INFO: Pod "downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126682296s STEP: Saw pod success Sep 14 12:35:43.523: INFO: Pod "downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f" satisfied condition "Succeeded or Failed" Sep 14 12:35:43.526: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f container client-container: STEP: delete the pod Sep 14 12:35:43.610: INFO: Waiting for pod downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f to disappear Sep 14 12:35:43.616: INFO: Pod downwardapi-volume-f3b88a4b-cc1c-4b8f-9d17-222e700f600f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:35:43.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8505" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2037,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:35:43.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:37:43.772: INFO: Deleting pod "var-expansion-39c394e2-986e-454e-b00f-ba4fcd8634d6" in namespace "var-expansion-9198" Sep 14 12:37:43.777: INFO: Wait up to 5m0s for pod "var-expansion-39c394e2-986e-454e-b00f-ba4fcd8634d6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:37:53.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9198" for this suite. 
• [SLOW TEST:130.214 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":136,"skipped":2044,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:37:53.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 14 12:37:57.928: INFO: Pod pod-hostip-192df975-d27f-4455-a929-b447157c38eb has hostIP: 172.18.0.16 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:37:57.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-309" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:37:57.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7967 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace 
statefulset-7967 STEP: Creating statefulset with conflicting port in namespace statefulset-7967 STEP: Waiting until pod test-pod will start running in namespace statefulset-7967 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7967 Sep 14 12:38:04.127: INFO: Observed stateful pod in namespace: statefulset-7967, name: ss-0, uid: 1708e312-1789-4dd8-bc5a-5b28d1476126, status phase: Pending. Waiting for statefulset controller to delete. Sep 14 12:38:04.257: INFO: Observed stateful pod in namespace: statefulset-7967, name: ss-0, uid: 1708e312-1789-4dd8-bc5a-5b28d1476126, status phase: Failed. Waiting for statefulset controller to delete. Sep 14 12:38:04.294: INFO: Observed stateful pod in namespace: statefulset-7967, name: ss-0, uid: 1708e312-1789-4dd8-bc5a-5b28d1476126, status phase: Failed. Waiting for statefulset controller to delete. Sep 14 12:38:04.332: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7967 STEP: Removing pod with conflicting port in namespace statefulset-7967 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7967 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 14 12:38:08.458: INFO: Deleting all statefulset in ns statefulset-7967 Sep 14 12:38:08.462: INFO: Scaling statefulset ss to 0 Sep 14 12:38:18.543: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:38:18.546: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:38:18.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7967" for this suite. 
• [SLOW TEST:20.631 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":138,"skipped":2116,"failed":0}
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:38:18.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Sep 14 12:38:25.162: INFO: Successfully updated pod "adopt-release-d5vm5"
STEP: Checking that the Job readopts the Pod
Sep 14 12:38:25.162: INFO: Waiting up to 15m0s for pod "adopt-release-d5vm5" in namespace "job-1543" to be "adopted"
Sep 14 12:38:25.167: INFO: Pod "adopt-release-d5vm5": Phase="Running", Reason="", readiness=true. Elapsed: 5.256444ms
Sep 14 12:38:27.173: INFO: Pod "adopt-release-d5vm5": Phase="Running", Reason="", readiness=true. Elapsed: 2.010505253s
Sep 14 12:38:27.173: INFO: Pod "adopt-release-d5vm5" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Sep 14 12:38:27.685: INFO: Successfully updated pod "adopt-release-d5vm5"
STEP: Checking that the Job releases the Pod
Sep 14 12:38:27.685: INFO: Waiting up to 15m0s for pod "adopt-release-d5vm5" in namespace "job-1543" to be "released"
Sep 14 12:38:27.711: INFO: Pod "adopt-release-d5vm5": Phase="Running", Reason="", readiness=true. Elapsed: 25.079817ms
Sep 14 12:38:29.735: INFO: Pod "adopt-release-d5vm5": Phase="Running", Reason="", readiness=true. Elapsed: 2.049915479s
Sep 14 12:38:29.735: INFO: Pod "adopt-release-d5vm5" satisfied condition "released"
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:38:29.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1543" for this suite.
• [SLOW TEST:11.177 seconds]
[sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":139,"skipped":2116,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:38:29.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:38:29.810: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Pending, waiting for it to be Running (with Ready = true)
Sep 14 12:38:31.814: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Pending, waiting for it to be Running (with Ready = true)
Sep 14 12:38:33.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:35.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:37.844: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:39.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:42.366: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:43.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:45.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:47.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:49.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:51.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = false)
Sep 14 12:38:53.815: INFO: The status of Pod test-webserver-e9591fbc-591f-4f03-9675-54de0724bd8f is Running (Ready = true)
Sep 14 12:38:53.818: INFO: Container started at 2020-09-14 12:38:32 +0000 UTC, pod became ready at 2020-09-14 12:38:53 +0000 UTC
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:38:53.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6521" for this suite.
• [SLOW TEST:24.082 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2126,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:38:53.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 14 12:38:53.883: INFO: Waiting up to 5m0s for pod "downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07" in namespace "downward-api-4440" to be "Succeeded or Failed"
Sep 14 12:38:53.898: INFO: Pod "downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07": Phase="Pending", Reason="", readiness=false. Elapsed: 14.531094ms
Sep 14 12:38:55.902: INFO: Pod "downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018870221s
Sep 14 12:38:57.907: INFO: Pod "downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023336628s
STEP: Saw pod success
Sep 14 12:38:57.907: INFO: Pod "downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07" satisfied condition "Succeeded or Failed"
Sep 14 12:38:57.909: INFO: Trying to get logs from node latest-worker pod downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07 container dapi-container:
STEP: delete the pod
Sep 14 12:38:57.980: INFO: Waiting for pod downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07 to disappear
Sep 14 12:38:57.992: INFO: Pod downward-api-26a4f5f1-fe78-4fbe-ba28-4dd151097c07 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:38:57.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4440" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":141,"skipped":2139,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:38:58.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Update Demo
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308
[It] should create and stop a replication controller [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Sep 14 12:38:58.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5398'
Sep 14 12:38:58.424: INFO: stderr: ""
Sep 14 12:38:58.424: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Sep 14 12:38:58.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5398'
Sep 14 12:38:58.548: INFO: stderr: ""
Sep 14 12:38:58.548: INFO: stdout: "update-demo-nautilus-5hgz5 update-demo-nautilus-z7bdp "
Sep 14 12:38:58.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5hgz5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5398'
Sep 14 12:38:58.640: INFO: stderr: ""
Sep 14 12:38:58.640: INFO: stdout: ""
Sep 14 12:38:58.640: INFO: update-demo-nautilus-5hgz5 is created but not running
Sep 14 12:39:03.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5398'
Sep 14 12:39:03.755: INFO: stderr: ""
Sep 14 12:39:03.755: INFO: stdout: "update-demo-nautilus-5hgz5 update-demo-nautilus-z7bdp "
Sep 14 12:39:03.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5hgz5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5398'
Sep 14 12:39:03.867: INFO: stderr: ""
Sep 14 12:39:03.867: INFO: stdout: "true"
Sep 14 12:39:03.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5hgz5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5398'
Sep 14 12:39:03.968: INFO: stderr: ""
Sep 14 12:39:03.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 14 12:39:03.968: INFO: validating pod update-demo-nautilus-5hgz5
Sep 14 12:39:03.973: INFO: got data: { "image": "nautilus.jpg" }
Sep 14 12:39:03.973: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 14 12:39:03.973: INFO: update-demo-nautilus-5hgz5 is verified up and running
Sep 14 12:39:03.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7bdp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5398'
Sep 14 12:39:04.084: INFO: stderr: ""
Sep 14 12:39:04.084: INFO: stdout: "true"
Sep 14 12:39:04.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z7bdp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5398'
Sep 14 12:39:04.183: INFO: stderr: ""
Sep 14 12:39:04.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Sep 14 12:39:04.184: INFO: validating pod update-demo-nautilus-z7bdp
Sep 14 12:39:04.188: INFO: got data: { "image": "nautilus.jpg" }
Sep 14 12:39:04.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Sep 14 12:39:04.188: INFO: update-demo-nautilus-z7bdp is verified up and running
STEP: using delete to clean up resources
Sep 14 12:39:04.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5398'
Sep 14 12:39:04.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 14 12:39:04.291: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Sep 14 12:39:04.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5398'
Sep 14 12:39:04.395: INFO: stderr: "No resources found in kubectl-5398 namespace.\n"
Sep 14 12:39:04.395: INFO: stdout: ""
Sep 14 12:39:04.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5398 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 14 12:39:04.495: INFO: stderr: ""
Sep 14 12:39:04.495: INFO: stdout: "update-demo-nautilus-5hgz5\nupdate-demo-nautilus-z7bdp\n"
Sep 14 12:39:04.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5398'
Sep 14 12:39:05.091: INFO: stderr: "No resources found in kubectl-5398 namespace.\n"
Sep 14 12:39:05.091: INFO: stdout: ""
Sep 14 12:39:05.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5398 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Sep 14 12:39:05.190: INFO: stderr: ""
Sep 14 12:39:05.190: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:39:05.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5398" for this suite.
• [SLOW TEST:7.197 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
    should create and stop a replication controller [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":142,"skipped":2143,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:39:05.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:39:09.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9380" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2152,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:39:09.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 14 12:39:09.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775" in namespace "downward-api-2425" to be "Succeeded or Failed"
Sep 14 12:39:09.951: INFO: Pod "downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802818ms
Sep 14 12:39:11.982: INFO: Pod "downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034878635s
Sep 14 12:39:13.987: INFO: Pod "downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039731587s
STEP: Saw pod success
Sep 14 12:39:13.987: INFO: Pod "downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775" satisfied condition "Succeeded or Failed"
Sep 14 12:39:13.990: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775 container client-container:
STEP: delete the pod
Sep 14 12:39:14.025: INFO: Waiting for pod downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775 to disappear
Sep 14 12:39:14.035: INFO: Pod downwardapi-volume-29285c1e-38a6-4d72-b119-b38770e3b775 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:39:14.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2425" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2176,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:39:14.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 14 12:39:14.146: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:14.151: INFO: Number of nodes with available pods: 0
Sep 14 12:39:14.151: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:39:15.704: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:15.898: INFO: Number of nodes with available pods: 0
Sep 14 12:39:15.899: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:39:16.181: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:16.278: INFO: Number of nodes with available pods: 0
Sep 14 12:39:16.278: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:39:17.155: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:17.161: INFO: Number of nodes with available pods: 0
Sep 14 12:39:17.161: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:39:18.174: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:18.179: INFO: Number of nodes with available pods: 0
Sep 14 12:39:18.179: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:39:19.191: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:19.195: INFO: Number of nodes with available pods: 2
Sep 14 12:39:19.195: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 14 12:39:19.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:19.270: INFO: Number of nodes with available pods: 1
Sep 14 12:39:19.270: INFO: Node latest-worker2 is running more than one daemon pod
Sep 14 12:39:20.276: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:20.279: INFO: Number of nodes with available pods: 1
Sep 14 12:39:20.279: INFO: Node latest-worker2 is running more than one daemon pod
Sep 14 12:39:21.275: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:21.279: INFO: Number of nodes with available pods: 1
Sep 14 12:39:21.279: INFO: Node latest-worker2 is running more than one daemon pod
Sep 14 12:39:22.278: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:22.282: INFO: Number of nodes with available pods: 1
Sep 14 12:39:22.282: INFO: Node latest-worker2 is running more than one daemon pod
Sep 14 12:39:23.277: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:39:23.281: INFO: Number of nodes with available pods: 2
Sep 14 12:39:23.281: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1044, will wait for the garbage collector to delete the pods
Sep 14 12:39:23.346: INFO: Deleting DaemonSet.extensions daemon-set took: 6.671823ms
Sep 14 12:39:23.746: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.204617ms
Sep 14 12:39:28.050: INFO: Number of nodes with available pods: 0
Sep 14 12:39:28.050: INFO: Number of running nodes: 0, number of available pods: 0
Sep 14 12:39:28.052: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1044/daemonsets","resourceVersion":"268871"},"items":null}
Sep 14 12:39:28.055: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1044/pods","resourceVersion":"268871"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:39:28.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1044" for this suite.
• [SLOW TEST:14.028 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":145,"skipped":2187,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:39:28.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 14 12:39:29.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 14 12:39:31.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683970, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 14 12:39:34.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683970, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683969, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 12:39:37.027: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:39:37.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5318" for this suite.
STEP: Destroying namespace "webhook-5318-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.642 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":146,"skipped":2201,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:39:37.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:39:38.905: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:39:40.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683978, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683978, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683978, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683978, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:39:44.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:39:44.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7682-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:39:45.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2766" for this suite. STEP: Destroying namespace "webhook-2766-markers" for this suite. 
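The "should mutate custom resource" test registers a mutating webhook and checks the created object comes back changed. Per the admission API contract, a mutating webhook returns an AdmissionReview whose response carries a base64-encoded JSONPatch. A minimal sketch of building such a response (the patch operation shown is hypothetical, not the one the e2e webhook applies):

```python
import base64
import json


def mutating_admission_response(uid, patch_ops):
    """Build the AdmissionReview response body a mutating webhook
    returns: allowed=True plus a base64-encoded JSONPatch document."""
    patch = base64.b64encode(json.dumps(patch_ops).encode()).decode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the request UID
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": patch,
        },
    }
```

The API server decodes the patch and applies it to the incoming object before persistence, which is how the test's custom resource ends up mutated.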
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":147,"skipped":2211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:39:45.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:39:45.992: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:39:48.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683985, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 12:39:50.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683986, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735683985, 
loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:39:53.059: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:40:03.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3726" for this suite. STEP: Destroying namespace "webhook-3726-markers" for this suite. 
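The "deny pod and configmap creation" test exercises the validating path: the webhook rejects non-compliant objects (including via PUT and PATCH) while whitelisted namespaces bypass it. A sketch of the deny decision a validating webhook returns; the label rule here is a hypothetical stand-in for whatever policy the e2e webhook actually enforces:

```python
def validating_admission_response(uid, obj, denied_label="webhook-disallow"):
    """Return an AdmissionReview response denying objects that carry a
    disallowed label (hypothetical policy for illustration)."""
    labels = obj.get("metadata", {}).get("labels", {})
    allowed = denied_label not in labels
    response = {"uid": uid, "allowed": allowed}
    if not allowed:
        # The status message surfaces in the client's error, which is
        # how the test confirms the webhook (not quota etc.) denied it.
        response["status"] = {"code": 403, "message": "denied by webhook"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Namespace bypass is configured on the webhook registration itself (a `namespaceSelector`), not in this response logic, which is why the whitelisted-namespace configmap in the log is admitted without the webhook running.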
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.128 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":148,"skipped":2245,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:40:03.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:40:07.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907" in namespace "downward-api-325" to be "Succeeded or Failed" Sep 14 12:40:07.948: INFO: Pod "downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907": Phase="Pending", Reason="", readiness=false. Elapsed: 167.724607ms Sep 14 12:40:09.952: INFO: Pod "downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171613744s Sep 14 12:40:11.956: INFO: Pod "downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17549968s STEP: Saw pod success Sep 14 12:40:11.956: INFO: Pod "downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907" satisfied condition "Succeeded or Failed" Sep 14 12:40:11.969: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907 container client-container: STEP: delete the pod Sep 14 12:40:12.001: INFO: Waiting for pod downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907 to disappear Sep 14 12:40:12.033: INFO: Pod downwardapi-volume-36e1aecb-75d4-474e-9cf0-af66f3f1d907 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:40:12.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-325" for this suite. 
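The Downward API test above checks the defaulting rule for `limits.memory`: when the container declares no memory limit, the value projected into the volume is the node's allocatable memory. The rule itself is one line; a sketch (byte values are illustrative):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """limits.memory as exposed by the downward API: the container's
    own limit if set, otherwise the node's allocatable memory."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```

This is why the test pod needs no resources stanza: the projected file is still non-empty, containing the node-level figure.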
• [SLOW TEST:8.659 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2250,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:40:12.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 14 12:40:16.826: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8799 pod-service-account-2d5fec1b-0ed9-4367-b8c2-b545b748766f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 14 12:40:17.034: INFO: 
Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8799 pod-service-account-2d5fec1b-0ed9-4367-b8c2-b545b748766f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 14 12:40:17.263: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8799 pod-service-account-2d5fec1b-0ed9-4367-b8c2-b545b748766f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:40:17.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8799" for this suite. • [SLOW TEST:5.469 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":150,"skipped":2254,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:40:17.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:40:17.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5735" for this suite. 
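The Kubelet test above creates a pod whose command always fails, then verifies the pod can still be deleted while crash-looping. A hedged sketch of such a manifest as a plain dict (names and image tag are illustrative, not the test's exact spec):

```python
def failing_busybox_pod(name, namespace):
    """A pod whose single container exits non-zero immediately, so the
    kubelet restarts it in a backoff loop until the pod is deleted."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [
                {
                    "name": "bb",
                    "image": "busybox",
                    "command": ["/bin/false"],  # always fails
                }
            ],
        },
    }
```

The point of the test is that deletion does not depend on the container ever succeeding: the kubelet tears the pod down from any restart state.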
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:40:17.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 14 12:40:22.338: INFO: Successfully updated pod "pod-update-0e92a1ed-e6f7-42ee-8b43-5f6a75fac948" STEP: verifying the updated pod is in kubernetes Sep 14 12:40:22.375: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:40:22.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5395" for this suite. 
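The Pods "should be updated" test relies on the fact that a running pod's spec is mostly immutable: the API server accepts only a small set of spec changes (notably container images and `activeDeadlineSeconds`) plus metadata updates. A rough approximation of that validation, for illustration only (the real rules live in the API server and are more detailed):

```python
# Spec fields that may change on a live pod (approximate subset).
MUTABLE_POD_SPEC_FIELDS = {"activeDeadlineSeconds", "tolerations"}


def spec_update_allowed(old_spec, new_spec):
    """Approximate pod-update validation: reject changes to any spec
    field outside the mutable set; within containers, only the image
    may change."""
    for key in set(old_spec) | set(new_spec):
        if key in MUTABLE_POD_SPEC_FIELDS or key == "containers":
            continue
        if old_spec.get(key) != new_spec.get(key):
            return False
    for old_c, new_c in zip(old_spec.get("containers", []),
                            new_spec.get("containers", [])):
        strip = lambda c: {k: v for k, v in c.items() if k != "image"}
        if strip(old_c) != strip(new_c):
            return False
    return True
```

Label updates, as performed in the log's "updating the pod" step, touch metadata rather than spec, so they pass regardless of this spec check.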
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:40:22.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8021 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 14 12:40:22.615: INFO: Found 0 stateful pods, waiting for 3 Sep 14 12:40:32.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:40:32.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 
12:40:32.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Sep 14 12:40:42.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:40:42.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:40:42.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 14 12:40:42.652: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Sep 14 12:40:52.792: INFO: Updating stateful set ss2 Sep 14 12:40:52.858: INFO: Waiting for Pod statefulset-8021/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 14 12:41:02.867: INFO: Waiting for Pod statefulset-8021/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Sep 14 12:41:13.030: INFO: Found 2 stateful pods, waiting for 3 Sep 14 12:41:23.037: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:41:23.037: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 12:41:23.037: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Sep 14 12:41:23.064: INFO: Updating stateful set ss2 Sep 14 12:41:23.128: INFO: Waiting for Pod statefulset-8021/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 14 12:41:33.140: INFO: Waiting for Pod statefulset-8021/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 14 12:41:43.157: INFO: Updating stateful set ss2 Sep 14 12:41:43.186: INFO: Waiting for 
StatefulSet statefulset-8021/ss2 to complete update Sep 14 12:41:43.186: INFO: Waiting for Pod statefulset-8021/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 14 12:41:53.195: INFO: Waiting for StatefulSet statefulset-8021/ss2 to complete update Sep 14 12:41:53.195: INFO: Waiting for Pod statefulset-8021/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 14 12:42:03.205: INFO: Deleting all statefulset in ns statefulset-8021 Sep 14 12:42:03.207: INFO: Scaling statefulset ss2 to 0 Sep 14 12:42:33.239: INFO: Waiting for statefulset status.replicas updated to 0 Sep 14 12:42:33.242: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:42:33.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8021" for this suite. 
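The canary and phased rollout in the StatefulSet test above are driven by the RollingUpdate `partition` field: only pods whose ordinal is at or above the partition move to the new revision, and lowering the partition phases the rollout across the set. The selection rule can be sketched as (pod-name prefix `ss2` taken from the log):

```python
def pods_on_new_revision(replicas, partition, name="ss2"):
    """Pods updated under a RollingUpdate with the given partition:
    ordinals >= partition get the new revision; the rest stay on the
    old one until the partition is lowered."""
    return [f"{name}-{i}" for i in range(replicas) if i >= partition]
```

With 3 replicas, `partition=3` updates nothing (the "partition greater than replicas" step), `partition=2` updates only `ss2-2` (the canary), and `partition=0` completes the rollout.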
• [SLOW TEST:130.878 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":153,"skipped":2324,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:42:33.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:42:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8402" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":154,"skipped":2339,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:42:33.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0914 12:43:14.359696 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 14 12:44:16.379: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 14 12:44:16.379: INFO: Deleting pod "simpletest.rc-86nbj" in namespace "gc-2437"
Sep 14 12:44:16.450: INFO: Deleting pod "simpletest.rc-8dbwj" in namespace "gc-2437"
Sep 14 12:44:16.506: INFO: Deleting pod "simpletest.rc-b72dx" in namespace "gc-2437"
Sep 14 12:44:16.600: INFO: Deleting pod "simpletest.rc-fhxw6" in namespace "gc-2437"
Sep 14 12:44:17.017: INFO: Deleting pod "simpletest.rc-hc2hs" in namespace "gc-2437"
Sep 14 12:44:17.327: INFO: Deleting pod "simpletest.rc-jmdk6" in namespace "gc-2437"
Sep 14 12:44:17.447: INFO: Deleting pod "simpletest.rc-k2dft" in namespace "gc-2437"
Sep 14 12:44:17.644: INFO: Deleting pod "simpletest.rc-pj9v7" in namespace "gc-2437"
Sep 14 12:44:17.897: INFO: Deleting pod "simpletest.rc-vvs8l" in namespace "gc-2437"
Sep 14 12:44:18.203: INFO: Deleting pod "simpletest.rc-ztr7n" in namespace "gc-2437"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:44:18.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2437" for this suite.
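The garbage-collector behavior exercised in this test is driven by the delete options sent with the ReplicationController deletion: with an orphaning propagation policy, the API server removes the owner object but strips the `ownerReferences` from its dependents instead of deleting them, which is why the pods survive the 30-second watch above. A minimal sketch of such a delete request body, built as plain JSON so it is independent of any client library (the helper name `build_orphan_delete` is illustrative, not part of the e2e framework):

```python
import json

def build_orphan_delete() -> str:
    """Build a DeleteOptions body that orphans dependents.

    propagationPolicy "Orphan" deletes the owner (here: the RC) while
    leaving dependent pods running; "Background" and "Foreground" would
    instead cascade the deletion to the pods.
    """
    options = {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": "Orphan",
    }
    return json.dumps(options)

print(build_orphan_delete())
```

Sending this body with the DELETE request for the RC is what makes the subsequent "wait for 30 seconds" step meaningful: any pod deletion during that window would indicate a cascading delete despite the orphan policy.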
• [SLOW TEST:105.036 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":155,"skipped":2342,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:44:18.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:44:18.746: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Sep 14 12:44:20.889: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:44:21.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6602" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":156,"skipped":2346,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:44:21.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:44:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-286" for this suite.
• [SLOW TEST:18.927 seconds]
[sig-apps] Job
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":157,"skipped":2359,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:44:40.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1e138e6f-0556-43b5-8ad8-27b2a65ffe0e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1e138e6f-0556-43b5-8ad8-27b2a65ffe0e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:45:51.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2320" for this suite.
• [SLOW TEST:70.815 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2373,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:45:51.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:45:51.760: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Sep 14 12:45:51.768: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:51.778: INFO: Number of nodes with available pods: 0
Sep 14 12:45:51.778: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:45:52.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:52.788: INFO: Number of nodes with available pods: 0
Sep 14 12:45:52.788: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:45:53.783: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:53.786: INFO: Number of nodes with available pods: 0
Sep 14 12:45:53.786: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:45:54.793: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:54.797: INFO: Number of nodes with available pods: 0
Sep 14 12:45:54.797: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:45:55.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:55.787: INFO: Number of nodes with available pods: 1
Sep 14 12:45:55.787: INFO: Node latest-worker2 is running more than one daemon pod
Sep 14 12:45:56.798: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:56.801: INFO: Number of nodes with available pods: 2
Sep 14 12:45:56.801: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Sep 14 12:45:57.183: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:57.183: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:57.188: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:58.192: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:58.192: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:58.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:45:59.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:59.193: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:45:59.193: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:45:59.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:00.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:00.193: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:00.193: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:00.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:01.194: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:01.194: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:01.194: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:01.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:02.194: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:02.194: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:02.194: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:02.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:03.197: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:03.197: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:03.197: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:03.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:04.194: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:04.194: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:04.194: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:04.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:05.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:05.193: INFO: Wrong image for pod: daemon-set-kxzn4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:05.193: INFO: Pod daemon-set-kxzn4 is not available
Sep 14 12:46:05.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:06.192: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:06.192: INFO: Pod daemon-set-l4fbg is not available
Sep 14 12:46:06.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:07.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:07.193: INFO: Pod daemon-set-l4fbg is not available
Sep 14 12:46:07.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:08.192: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:08.192: INFO: Pod daemon-set-l4fbg is not available
Sep 14 12:46:08.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:09.230: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:09.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:10.191: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:10.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:11.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:11.193: INFO: Pod daemon-set-fvs2d is not available
Sep 14 12:46:11.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:12.205: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:12.205: INFO: Pod daemon-set-fvs2d is not available
Sep 14 12:46:12.278: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:13.499: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:13.499: INFO: Pod daemon-set-fvs2d is not available
Sep 14 12:46:13.503: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:14.193: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:14.193: INFO: Pod daemon-set-fvs2d is not available
Sep 14 12:46:14.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:15.194: INFO: Wrong image for pod: daemon-set-fvs2d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
Sep 14 12:46:15.194: INFO: Pod daemon-set-fvs2d is not available
Sep 14 12:46:15.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:16.193: INFO: Pod daemon-set-btkj6 is not available
Sep 14 12:46:16.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
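The repeated "Number of nodes with available pods" and "Wrong image for pod" entries above come from poll-until-ready loops: the framework re-checks cluster state on a fixed interval until the condition holds or a deadline passes. A minimal generic sketch of such a loop (hypothetical helper, not the e2e framework's actual API):

```python
import time

def wait_for(condition, timeout=30.0, interval=1.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse; returns whether the condition was met.
    Mirrors the retry loops visible in the log above."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Usage: simulate a DaemonSet whose count of ready nodes climbs 0 -> 2,
# the way the "Number of nodes with available pods" lines do above.
ready_counts = iter([0, 0, 1, 2])

def all_nodes_ready():
    return next(ready_counts, 2) == 2

assert wait_for(all_nodes_ready, timeout=10.0, interval=0.0)
```

Returning a boolean rather than raising lets the caller decide whether a timeout is a test failure (as it is here) or merely a condition to report.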
Sep 14 12:46:16.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:16.203: INFO: Number of nodes with available pods: 1
Sep 14 12:46:16.203: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:46:17.210: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:17.214: INFO: Number of nodes with available pods: 1
Sep 14 12:46:17.214: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:46:18.208: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:18.211: INFO: Number of nodes with available pods: 1
Sep 14 12:46:18.211: INFO: Node latest-worker is running more than one daemon pod
Sep 14 12:46:19.214: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Sep 14 12:46:19.216: INFO: Number of nodes with available pods: 2
Sep 14 12:46:19.216: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-765, will wait for the garbage collector to delete the pods
Sep 14 12:46:19.286: INFO: Deleting DaemonSet.extensions daemon-set took: 7.32737ms
Sep 14 12:46:19.786: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.301531ms
Sep 14 12:46:25.690: INFO: Number of nodes with available pods: 0
Sep 14 12:46:25.690: INFO: Number of running nodes: 0, number of available pods: 0
Sep 14 12:46:25.693: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-765/daemonsets","resourceVersion":"271224"},"items":null}
Sep 14 12:46:25.696: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-765/pods","resourceVersion":"271224"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:46:25.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-765" for this suite.
• [SLOW TEST:34.062 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":159,"skipped":2389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:46:25.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-023b8829-99d3-4655-a5e4-3701ec20b97e
STEP: Creating a pod to test consume configMaps
Sep 14 12:46:25.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a" in namespace "projected-6567" to be "Succeeded or Failed"
Sep 14 12:46:25.829: INFO: Pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.740195ms
Sep 14 12:46:27.834: INFO: Pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024002015s
Sep 14 12:46:29.839: INFO: Pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a": Phase="Running", Reason="", readiness=true. Elapsed: 4.029253841s
Sep 14 12:46:31.843: INFO: Pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033411306s
STEP: Saw pod success
Sep 14 12:46:31.844: INFO: Pod "pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a" satisfied condition "Succeeded or Failed"
Sep 14 12:46:31.847: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a container projected-configmap-volume-test:
STEP: delete the pod
Sep 14 12:46:31.885: INFO: Waiting for pod pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a to disappear
Sep 14 12:46:31.911: INFO: Pod pod-projected-configmaps-612fc723-9e23-42bf-ab26-7bcd5a30e46a no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:46:31.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6567" for this suite.
• [SLOW TEST:6.204 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2419,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:46:31.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:46:31.985: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946" in namespace "security-context-test-4720" to be "Succeeded or Failed"
Sep 14 12:46:31.989: INFO: Pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946": Phase="Pending", Reason="", readiness=false. Elapsed: 3.880855ms
Sep 14 12:46:33.995: INFO: Pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009825638s
Sep 14 12:46:36.000: INFO: Pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014695093s
Sep 14 12:46:36.000: INFO: Pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946" satisfied condition "Succeeded or Failed"
Sep 14 12:46:36.006: INFO: Got logs for pod "busybox-privileged-false-9cb37921-1dff-40d2-bd0f-d0b6e0d11946": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:46:36.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4720" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2427,"failed":0}
SSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:46:36.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 14 12:46:36.092: INFO: Waiting up to 5m0s for pod "downward-api-229549d3-056c-47ff-b016-f2284d89a639" in namespace "downward-api-4268" to be "Succeeded or Failed"
Sep 14 12:46:36.097: INFO: Pod "downward-api-229549d3-056c-47ff-b016-f2284d89a639": Phase="Pending", Reason="", readiness=false. Elapsed: 5.1195ms
Sep 14 12:46:38.102: INFO: Pod "downward-api-229549d3-056c-47ff-b016-f2284d89a639": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009797543s
Sep 14 12:46:40.108: INFO: Pod "downward-api-229549d3-056c-47ff-b016-f2284d89a639": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016195384s
STEP: Saw pod success
Sep 14 12:46:40.108: INFO: Pod "downward-api-229549d3-056c-47ff-b016-f2284d89a639" satisfied condition "Succeeded or Failed"
Sep 14 12:46:40.111: INFO: Trying to get logs from node latest-worker2 pod downward-api-229549d3-056c-47ff-b016-f2284d89a639 container dapi-container:
STEP: delete the pod
Sep 14 12:46:40.243: INFO: Waiting for pod downward-api-229549d3-056c-47ff-b016-f2284d89a639 to disappear
Sep 14 12:46:40.258: INFO: Pod downward-api-229549d3-056c-47ff-b016-f2284d89a639 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:46:40.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4268" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2430,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:46:40.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:46:41.121: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:46:43.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684401, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684401, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684401, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684401, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:46:46.399: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:46:46.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9651" for this suite. STEP: Destroying namespace "webhook-9651-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.401 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":163,"skipped":2435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:46:46.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-map-a9c6e7c0-b2e1-470e-85c3-ac1b20bdb6d4 STEP: Creating a pod to test consume secrets Sep 14 12:46:46.778: INFO: Waiting up to 5m0s for pod "pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8" in namespace "secrets-6521" to be "Succeeded or Failed" Sep 14 12:46:46.791: INFO: Pod "pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.417035ms Sep 14 12:46:48.796: INFO: Pod "pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017282916s Sep 14 12:46:50.799: INFO: Pod "pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02086308s STEP: Saw pod success Sep 14 12:46:50.799: INFO: Pod "pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8" satisfied condition "Succeeded or Failed" Sep 14 12:46:50.801: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8 container secret-volume-test: STEP: delete the pod Sep 14 12:46:50.818: INFO: Waiting for pod pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8 to disappear Sep 14 12:46:50.822: INFO: Pod pod-secrets-4215b881-5715-4b89-ae12-eab0b2ed49e8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:46:50.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6521" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2464,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:46:50.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 14 12:46:50.898: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:46:57.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-951" for this suite. 
• [SLOW TEST:6.347 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":165,"skipped":2471,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:46:57.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4019 STEP: Creating active service to test reachability 
when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4019 STEP: creating replication controller externalsvc in namespace services-4019 I0914 12:46:57.572942 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4019, replica count: 2 I0914 12:47:00.623315 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 12:47:03.623569 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 14 12:47:03.691: INFO: Creating new exec pod Sep 14 12:47:07.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-4019 execpodqh44r -- /bin/sh -x -c nslookup clusterip-service.services-4019.svc.cluster.local' Sep 14 12:47:11.832: INFO: stderr: "I0914 12:47:11.742440 2095 log.go:181] (0xc0005ccdc0) (0xc000ae3180) Create stream\nI0914 12:47:11.742508 2095 log.go:181] (0xc0005ccdc0) (0xc000ae3180) Stream added, broadcasting: 1\nI0914 12:47:11.748403 2095 log.go:181] (0xc0005ccdc0) Reply frame received for 1\nI0914 12:47:11.748455 2095 log.go:181] (0xc0005ccdc0) (0xc0008e8000) Create stream\nI0914 12:47:11.748469 2095 log.go:181] (0xc0005ccdc0) (0xc0008e8000) Stream added, broadcasting: 3\nI0914 12:47:11.749355 2095 log.go:181] (0xc0005ccdc0) Reply frame received for 3\nI0914 12:47:11.749386 2095 log.go:181] (0xc0005ccdc0) (0xc000208000) Create stream\nI0914 12:47:11.749395 2095 log.go:181] (0xc0005ccdc0) (0xc000208000) Stream added, broadcasting: 5\nI0914 12:47:11.750174 2095 log.go:181] (0xc0005ccdc0) Reply frame received for 5\nI0914 12:47:11.813603 2095 log.go:181] (0xc0005ccdc0) Data frame received for 5\nI0914 12:47:11.813643 2095 log.go:181] 
(0xc000208000) (5) Data frame handling\nI0914 12:47:11.813672 2095 log.go:181] (0xc000208000) (5) Data frame sent\n+ nslookup clusterip-service.services-4019.svc.cluster.local\nI0914 12:47:11.824352 2095 log.go:181] (0xc0005ccdc0) Data frame received for 3\nI0914 12:47:11.824370 2095 log.go:181] (0xc0008e8000) (3) Data frame handling\nI0914 12:47:11.824383 2095 log.go:181] (0xc0008e8000) (3) Data frame sent\nI0914 12:47:11.825082 2095 log.go:181] (0xc0005ccdc0) Data frame received for 3\nI0914 12:47:11.825098 2095 log.go:181] (0xc0008e8000) (3) Data frame handling\nI0914 12:47:11.825111 2095 log.go:181] (0xc0008e8000) (3) Data frame sent\nI0914 12:47:11.825528 2095 log.go:181] (0xc0005ccdc0) Data frame received for 3\nI0914 12:47:11.825539 2095 log.go:181] (0xc0008e8000) (3) Data frame handling\nI0914 12:47:11.825750 2095 log.go:181] (0xc0005ccdc0) Data frame received for 5\nI0914 12:47:11.825768 2095 log.go:181] (0xc000208000) (5) Data frame handling\nI0914 12:47:11.827899 2095 log.go:181] (0xc0005ccdc0) Data frame received for 1\nI0914 12:47:11.827915 2095 log.go:181] (0xc000ae3180) (1) Data frame handling\nI0914 12:47:11.827921 2095 log.go:181] (0xc000ae3180) (1) Data frame sent\nI0914 12:47:11.827928 2095 log.go:181] (0xc0005ccdc0) (0xc000ae3180) Stream removed, broadcasting: 1\nI0914 12:47:11.827939 2095 log.go:181] (0xc0005ccdc0) Go away received\nI0914 12:47:11.828453 2095 log.go:181] (0xc0005ccdc0) (0xc000ae3180) Stream removed, broadcasting: 1\nI0914 12:47:11.828479 2095 log.go:181] (0xc0005ccdc0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0914 12:47:11.828491 2095 log.go:181] (0xc0005ccdc0) (0xc000208000) Stream removed, broadcasting: 5\n" Sep 14 12:47:11.832: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4019.svc.cluster.local\tcanonical name = externalsvc.services-4019.svc.cluster.local.\nName:\texternalsvc.services-4019.svc.cluster.local\nAddress: 10.105.246.233\n\n" STEP: deleting 
ReplicationController externalsvc in namespace services-4019, will wait for the garbage collector to delete the pods Sep 14 12:47:11.893: INFO: Deleting ReplicationController externalsvc took: 6.898134ms Sep 14 12:47:11.993: INFO: Terminating ReplicationController externalsvc pods took: 100.236347ms Sep 14 12:47:16.243: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:47:16.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4019" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:19.091 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":166,"skipped":2477,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:47:16.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 14 12:47:19.364: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:47:19.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1960" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2481,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:47:19.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 14 12:47:20.031: INFO: Waiting up to 5m0s for pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8" in namespace "emptydir-5511" to be "Succeeded or Failed" Sep 14 12:47:20.080: INFO: Pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8": Phase="Pending", Reason="", readiness=false. Elapsed: 48.791961ms Sep 14 12:47:22.086: INFO: Pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054903733s Sep 14 12:47:24.090: INFO: Pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.059039813s Sep 14 12:47:26.095: INFO: Pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064433839s STEP: Saw pod success Sep 14 12:47:26.096: INFO: Pod "pod-49fb7356-0e76-4171-91b7-2eaa312fafd8" satisfied condition "Succeeded or Failed" Sep 14 12:47:26.099: INFO: Trying to get logs from node latest-worker2 pod pod-49fb7356-0e76-4171-91b7-2eaa312fafd8 container test-container: STEP: delete the pod Sep 14 12:47:26.137: INFO: Waiting for pod pod-49fb7356-0e76-4171-91b7-2eaa312fafd8 to disappear Sep 14 12:47:26.152: INFO: Pod pod-49fb7356-0e76-4171-91b7-2eaa312fafd8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:47:26.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5511" for this suite. • [SLOW TEST:6.393 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2483,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:47:26.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6b601365-a684-4614-8e8f-254c7ac7f1d6 STEP: Creating a pod to test consume secrets Sep 14 12:47:26.310: INFO: Waiting up to 5m0s for pod "pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36" in namespace "secrets-3047" to be "Succeeded or Failed" Sep 14 12:47:26.326: INFO: Pod "pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035551ms Sep 14 12:47:28.330: INFO: Pod "pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020131015s Sep 14 12:47:30.334: INFO: Pod "pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024005982s STEP: Saw pod success Sep 14 12:47:30.334: INFO: Pod "pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36" satisfied condition "Succeeded or Failed" Sep 14 12:47:30.338: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36 container secret-env-test: STEP: delete the pod Sep 14 12:47:30.357: INFO: Waiting for pod pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36 to disappear Sep 14 12:47:30.373: INFO: Pod pod-secrets-8e84a5ed-e0c9-468e-8565-746936e31a36 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:47:30.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3047" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:47:30.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition 
objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:47:30.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 14 12:47:31.136: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:47:31Z]] name:name1 resourceVersion:271805 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3245e9d8-1854-4958-bf08-1e46e6ff3551] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 14 12:47:41.146: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:41Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:47:41Z]] name:name2 resourceVersion:271852 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bcf2ddea-a6b2-4ba9-a4e0-9f947a81754a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 14 12:47:51.154: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:47:51Z]] name:name1 
resourceVersion:271882 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3245e9d8-1854-4958-bf08-1e46e6ff3551] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 14 12:48:01.162: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:48:01Z]] name:name2 resourceVersion:271912 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bcf2ddea-a6b2-4ba9-a4e0-9f947a81754a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 14 12:48:11.172: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:47:51Z]] name:name1 resourceVersion:271942 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3245e9d8-1854-4958-bf08-1e46e6ff3551] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 14 12:48:21.181: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-14T12:47:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-14T12:48:01Z]] name:name2 
resourceVersion:271972 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bcf2ddea-a6b2-4ba9-a4e0-9f947a81754a] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:48:31.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5446" for this suite. • [SLOW TEST:61.317 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":170,"skipped":2575,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Sep 14 12:48:31.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 14 12:48:31.761: INFO: Waiting up to 5m0s for pod "pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547" in namespace "emptydir-7250" to be "Succeeded or Failed" Sep 14 12:48:31.775: INFO: Pod "pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547": Phase="Pending", Reason="", readiness=false. Elapsed: 14.057156ms Sep 14 12:48:33.865: INFO: Pod "pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103474446s Sep 14 12:48:35.869: INFO: Pod "pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107972388s STEP: Saw pod success Sep 14 12:48:35.869: INFO: Pod "pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547" satisfied condition "Succeeded or Failed" Sep 14 12:48:35.872: INFO: Trying to get logs from node latest-worker2 pod pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547 container test-container: STEP: delete the pod Sep 14 12:48:35.908: INFO: Waiting for pod pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547 to disappear Sep 14 12:48:35.930: INFO: Pod pod-5960f9aa-9d12-47ca-bbf4-3d0b6913e547 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:48:35.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7250" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2583,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:48:35.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:48:36.000: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 14 12:48:36.015: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 14 12:48:41.039: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 14 12:48:41.039: INFO: Creating deployment "test-rolling-update-deployment" Sep 14 12:48:41.057: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 14 12:48:41.069: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 14 12:48:43.150: INFO: Ensuring status 
for deployment "test-rolling-update-deployment" is the expected Sep 14 12:48:43.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684521, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684521, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684521, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684521, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 12:48:45.320: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 14 12:48:45.340: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4626 /apis/apps/v1/namespaces/deployment-4626/deployments/test-rolling-update-deployment fe76b7e8-ca9f-4e92-83a4-35e3508c9b60 272118 1 2020-09-14 12:48:41 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-14 12:48:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 12:48:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044da388 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-14 12:48:41 +0000 UTC,LastTransitionTime:2020-09-14 12:48:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-14 12:48:44 +0000 UTC,LastTransitionTime:2020-09-14 12:48:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 14 12:48:45.342: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-4626 /apis/apps/v1/namespaces/deployment-4626/replicasets/test-rolling-update-deployment-c4cb8d6d9 0f35fe95-436c-440b-ba67-601aa5b053d6 272107 1 2020-09-14 12:48:41 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fe76b7e8-ca9f-4e92-83a4-35e3508c9b60 0xc0044da8d0 0xc0044da8d1}] [] [{kube-controller-manager Update apps/v1 2020-09-14 12:48:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe76b7e8-ca9f-4e92-83a4-35e3508c9b60\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044da948 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:48:45.342: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 14 12:48:45.342: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4626 /apis/apps/v1/namespaces/deployment-4626/replicasets/test-rolling-update-controller ac61cf8e-015d-4f00-bcef-6e71e66ec71d 272117 2 2020-09-14 12:48:36 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fe76b7e8-ca9f-4e92-83a4-35e3508c9b60 0xc0044da7c7 0xc0044da7c8}] [] [{e2e.test Update apps/v1 2020-09-14 12:48:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 12:48:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe76b7e8-ca9f-4e92-83a4-35e3508c9b60\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0044da868 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 12:48:45.344: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-lz4bp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-lz4bp test-rolling-update-deployment-c4cb8d6d9- deployment-4626 /api/v1/namespaces/deployment-4626/pods/test-rolling-update-deployment-c4cb8d6d9-lz4bp c88b950e-750a-4c52-971c-d5a75cd7edbc 272106 0 2020-09-14 12:48:41 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 0f35fe95-436c-440b-ba67-601aa5b053d6 0xc0044dade0 0xc0044dade1}] [] [{kube-controller-manager Update v1 2020-09-14 12:48:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f35fe95-436c-440b-ba67-601aa5b053d6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 12:48:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n2nlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n2nlf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n2nlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer
{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:48:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:48:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 12:48:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.196,StartTime:2020-09-14 12:48:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 12:48:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://aa80566875a2fe28ab5f8aa514ef2de93377803270ae06eb51e1d0027dd28c78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:48:45.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4626" for this suite. 
• [SLOW TEST:9.411 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":172,"skipped":2585,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:48:45.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Sep 14 12:50:45.967: INFO: Successfully updated pod "var-expansion-4373ff94-bc82-46e6-8b09-39e16102db3e" STEP: waiting for pod running STEP: deleting the pod gracefully Sep 14 12:50:48.031: INFO: Deleting pod 
"var-expansion-4373ff94-bc82-46e6-8b09-39e16102db3e" in namespace "var-expansion-9962" Sep 14 12:50:48.037: INFO: Wait up to 5m0s for pod "var-expansion-4373ff94-bc82-46e6-8b09-39e16102db3e" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:51:22.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9962" for this suite. • [SLOW TEST:156.720 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":173,"skipped":2586,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:51:22.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a 
default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3911 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 14 12:51:22.126: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 14 12:51:22.193: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 14 12:51:24.198: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 14 12:51:26.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:28.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:30.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:32.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:34.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:36.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:51:38.199: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 14 12:51:38.205: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 14 12:51:40.210: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 14 12:51:42.209: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 14 12:51:46.295: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.219 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3911 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:51:46.295: INFO: >>> kubeConfig: /root/.kube/config I0914 
12:51:46.331550 7 log.go:181] (0xc000954580) (0xc003836aa0) Create stream I0914 12:51:46.331574 7 log.go:181] (0xc000954580) (0xc003836aa0) Stream added, broadcasting: 1 I0914 12:51:46.334812 7 log.go:181] (0xc000954580) Reply frame received for 1 I0914 12:51:46.334890 7 log.go:181] (0xc000954580) (0xc003dcf860) Create stream I0914 12:51:46.334945 7 log.go:181] (0xc000954580) (0xc003dcf860) Stream added, broadcasting: 3 I0914 12:51:46.336853 7 log.go:181] (0xc000954580) Reply frame received for 3 I0914 12:51:46.336920 7 log.go:181] (0xc000954580) (0xc003a48140) Create stream I0914 12:51:46.336955 7 log.go:181] (0xc000954580) (0xc003a48140) Stream added, broadcasting: 5 I0914 12:51:46.338072 7 log.go:181] (0xc000954580) Reply frame received for 5 I0914 12:51:47.419138 7 log.go:181] (0xc000954580) Data frame received for 5 I0914 12:51:47.419180 7 log.go:181] (0xc003a48140) (5) Data frame handling I0914 12:51:47.419213 7 log.go:181] (0xc000954580) Data frame received for 3 I0914 12:51:47.419230 7 log.go:181] (0xc003dcf860) (3) Data frame handling I0914 12:51:47.419248 7 log.go:181] (0xc003dcf860) (3) Data frame sent I0914 12:51:47.419261 7 log.go:181] (0xc000954580) Data frame received for 3 I0914 12:51:47.419274 7 log.go:181] (0xc003dcf860) (3) Data frame handling I0914 12:51:47.421991 7 log.go:181] (0xc000954580) Data frame received for 1 I0914 12:51:47.422023 7 log.go:181] (0xc003836aa0) (1) Data frame handling I0914 12:51:47.422049 7 log.go:181] (0xc003836aa0) (1) Data frame sent I0914 12:51:47.422087 7 log.go:181] (0xc000954580) (0xc003836aa0) Stream removed, broadcasting: 1 I0914 12:51:47.422135 7 log.go:181] (0xc000954580) Go away received I0914 12:51:47.422251 7 log.go:181] (0xc000954580) (0xc003836aa0) Stream removed, broadcasting: 1 I0914 12:51:47.422287 7 log.go:181] (0xc000954580) (0xc003dcf860) Stream removed, broadcasting: 3 I0914 12:51:47.422329 7 log.go:181] (0xc000954580) (0xc003a48140) Stream removed, broadcasting: 5 Sep 14 12:51:47.422: INFO: Found 
all expected endpoints: [netserver-0] Sep 14 12:51:47.425: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.198 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3911 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:51:47.426: INFO: >>> kubeConfig: /root/.kube/config I0914 12:51:47.458891 7 log.go:181] (0xc000613ad0) (0xc003a48640) Create stream I0914 12:51:47.458919 7 log.go:181] (0xc000613ad0) (0xc003a48640) Stream added, broadcasting: 1 I0914 12:51:47.460719 7 log.go:181] (0xc000613ad0) Reply frame received for 1 I0914 12:51:47.460788 7 log.go:181] (0xc000613ad0) (0xc0014dbae0) Create stream I0914 12:51:47.460804 7 log.go:181] (0xc000613ad0) (0xc0014dbae0) Stream added, broadcasting: 3 I0914 12:51:47.461788 7 log.go:181] (0xc000613ad0) Reply frame received for 3 I0914 12:51:47.461822 7 log.go:181] (0xc000613ad0) (0xc003836b40) Create stream I0914 12:51:47.461845 7 log.go:181] (0xc000613ad0) (0xc003836b40) Stream added, broadcasting: 5 I0914 12:51:47.462689 7 log.go:181] (0xc000613ad0) Reply frame received for 5 I0914 12:51:48.514123 7 log.go:181] (0xc000613ad0) Data frame received for 3 I0914 12:51:48.514160 7 log.go:181] (0xc0014dbae0) (3) Data frame handling I0914 12:51:48.514176 7 log.go:181] (0xc0014dbae0) (3) Data frame sent I0914 12:51:48.514303 7 log.go:181] (0xc000613ad0) Data frame received for 3 I0914 12:51:48.514359 7 log.go:181] (0xc0014dbae0) (3) Data frame handling I0914 12:51:48.514413 7 log.go:181] (0xc000613ad0) Data frame received for 5 I0914 12:51:48.514440 7 log.go:181] (0xc003836b40) (5) Data frame handling I0914 12:51:48.516668 7 log.go:181] (0xc000613ad0) Data frame received for 1 I0914 12:51:48.516720 7 log.go:181] (0xc003a48640) (1) Data frame handling I0914 12:51:48.516750 7 log.go:181] (0xc003a48640) (1) Data frame sent I0914 12:51:48.516768 7 log.go:181] (0xc000613ad0) (0xc003a48640) Stream removed, broadcasting: 1 
I0914 12:51:48.516792 7 log.go:181] (0xc000613ad0) Go away received I0914 12:51:48.516928 7 log.go:181] (0xc000613ad0) (0xc003a48640) Stream removed, broadcasting: 1 I0914 12:51:48.516958 7 log.go:181] (0xc000613ad0) (0xc0014dbae0) Stream removed, broadcasting: 3 I0914 12:51:48.516971 7 log.go:181] (0xc000613ad0) (0xc003836b40) Stream removed, broadcasting: 5 Sep 14 12:51:48.516: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:51:48.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3911" for this suite. • [SLOW TEST:26.459 seconds] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2591,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:51:48.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:51:49.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:51:51.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684709, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684709, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684709, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684709, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:51:54.190: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Sep 14 12:51:54.492: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:51:54.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5900" for this suite. STEP: Destroying namespace "webhook-5900-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.411 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":175,"skipped":2608,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:51:54.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 14 12:51:55.350: INFO: Waiting up to 5m0s for pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0" in namespace "emptydir-6708" to be "Succeeded or Failed" Sep 14 12:51:55.531: INFO: Pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 181.057649ms Sep 14 12:51:58.322: INFO: Pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972250279s Sep 14 12:52:00.385: INFO: Pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.035410232s Sep 14 12:52:02.389: INFO: Pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.039737041s STEP: Saw pod success Sep 14 12:52:02.389: INFO: Pod "pod-7128a099-a35d-4e50-813c-51c5668e7ed0" satisfied condition "Succeeded or Failed" Sep 14 12:52:02.392: INFO: Trying to get logs from node latest-worker2 pod pod-7128a099-a35d-4e50-813c-51c5668e7ed0 container test-container: STEP: delete the pod Sep 14 12:52:02.428: INFO: Waiting for pod pod-7128a099-a35d-4e50-813c-51c5668e7ed0 to disappear Sep 14 12:52:02.432: INFO: Pod pod-7128a099-a35d-4e50-813c-51c5668e7ed0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:02.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6708" for this suite. • [SLOW TEST:7.520 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2613,"failed":0} SSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Sep 14 12:52:02.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Sep 14 12:52:02.618: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:02.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7397" for this suite. 
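[Editor's note, not part of the log] The Events API test above creates a labeled set of events, lists them by label selector, and removes them with a DeleteCollection call. A minimal sketch of such a labeled event, assuming hypothetical names (`demo-event-1`, the `testevent-set` label, and the `regarding` target are illustrative, not taken from the test):

```yaml
apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: demo-event-1          # hypothetical name
  labels:
    testevent-set: "true"     # hypothetical label used for selector-based delete
eventTime: "2020-09-14T12:52:02.000000Z"   # MicroTime is required for events.k8s.io/v1
reason: DemoReason
action: Demo
reportingController: demo-controller
reportingInstance: demo-instance
regarding:
  kind: Pod
  name: demo-pod
  namespace: default
type: Normal
note: demo event for label-selector deletion
```

The collection delete step then corresponds roughly to `kubectl delete events -l testevent-set=true -n <namespace>`.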
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":177,"skipped":2618,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:02.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:52:03.420: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:52:05.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 12:52:07.436: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684723, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:52:10.535: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:52:10.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-285-crds.webhook.example.com via the AdmissionRegistration API STEP: 
Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:11.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7838" for this suite. STEP: Destroying namespace "webhook-7838-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.160 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":178,"skipped":2624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 
12:52:11.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 14 12:52:22.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:22.284: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:24.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:24.289: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:26.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:26.288: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:28.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:28.289: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:30.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:30.289: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:32.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:32.289: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:34.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:34.290: INFO: Pod pod-with-prestop-http-hook still exists Sep 14 12:52:36.284: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 14 12:52:36.289: INFO: Pod pod-with-prestop-http-hook 
no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:36.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-911" for this suite. • [SLOW TEST:24.486 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:36.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 14 12:52:36.389: INFO: Waiting up to 5m0s for pod "downward-api-90b00958-dfc0-4451-b1fc-1566171613f2" in namespace "downward-api-5457" to be "Succeeded or Failed" Sep 14 12:52:36.411: INFO: Pod "downward-api-90b00958-dfc0-4451-b1fc-1566171613f2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.15926ms Sep 14 12:52:38.429: INFO: Pod "downward-api-90b00958-dfc0-4451-b1fc-1566171613f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039643204s Sep 14 12:52:40.433: INFO: Pod "downward-api-90b00958-dfc0-4451-b1fc-1566171613f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044027102s STEP: Saw pod success Sep 14 12:52:40.433: INFO: Pod "downward-api-90b00958-dfc0-4451-b1fc-1566171613f2" satisfied condition "Succeeded or Failed" Sep 14 12:52:40.436: INFO: Trying to get logs from node latest-worker2 pod downward-api-90b00958-dfc0-4451-b1fc-1566171613f2 container dapi-container: STEP: delete the pod Sep 14 12:52:40.470: INFO: Waiting for pod downward-api-90b00958-dfc0-4451-b1fc-1566171613f2 to disappear Sep 14 12:52:40.479: INFO: Pod downward-api-90b00958-dfc0-4451-b1fc-1566171613f2 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:40.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5457" for this suite. 
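[Editor's note, not part of the log] The Downward API test above verifies that pod name, namespace, and IP are exposed to the container as environment variables. A minimal pod manifest reproducing that setup (pod and container names are hypothetical; the `fieldRef` paths are the standard Downward API fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```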
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":2676,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:40.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:40.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3284" for this suite. 
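[Editor's note, not part of the log] The Lease test above checks availability of the `coordination.k8s.io` Lease API, which backs leader election and node heartbeats. A minimal Lease object of the kind such a test exercises (names and durations hypothetical):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease             # hypothetical name
  namespace: default
spec:
  holderIdentity: demo-holder  # identity of the current lease holder
  leaseDurationSeconds: 30     # how long the lease is valid after renewal
```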
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":181,"skipped":2695,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:40.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-448afef9-f22e-4c3f-8445-6e5dac492473 STEP: Creating a pod to test consume configMaps Sep 14 12:52:40.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f" in namespace "configmap-5289" to be "Succeeded or Failed" Sep 14 12:52:40.752: INFO: Pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.245121ms Sep 14 12:52:42.783: INFO: Pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047661231s Sep 14 12:52:44.787: INFO: Pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f": Phase="Running", Reason="", readiness=true. Elapsed: 4.052453724s Sep 14 12:52:46.792: INFO: Pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.056728297s STEP: Saw pod success Sep 14 12:52:46.792: INFO: Pod "pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f" satisfied condition "Succeeded or Failed" Sep 14 12:52:46.795: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f container configmap-volume-test: STEP: delete the pod Sep 14 12:52:46.826: INFO: Waiting for pod pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f to disappear Sep 14 12:52:46.839: INFO: Pod pod-configmaps-295a41a7-ea88-48ef-ad57-2483682ce09f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:46.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5289" for this suite. • [SLOW TEST:6.245 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":2697,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:46.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 14 12:52:47.672: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 14 12:52:49.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 12:52:51.687: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735684767, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:52:54.735: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:52:54.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:52:57.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1901" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:10.226 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":183,"skipped":2704,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:52:57.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 12:52:57.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708" in namespace "projected-8853" to be "Succeeded or Failed" Sep 14 12:52:57.291: INFO: Pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708": Phase="Pending", Reason="", readiness=false. Elapsed: 101.895531ms Sep 14 12:52:59.297: INFO: Pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10750998s Sep 14 12:53:01.453: INFO: Pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263902994s Sep 14 12:53:03.457: INFO: Pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.267545053s STEP: Saw pod success Sep 14 12:53:03.457: INFO: Pod "downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708" satisfied condition "Succeeded or Failed" Sep 14 12:53:03.459: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708 container client-container: STEP: delete the pod Sep 14 12:53:03.524: INFO: Waiting for pod downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708 to disappear Sep 14 12:53:03.541: INFO: Pod downwardapi-volume-0d8179b7-acce-4d1d-bfa3-8284851e1708 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:53:03.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8853" for this suite. 
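[Editor's note, not part of the log] The projected downwardAPI test above checks that a container's memory limit can be surfaced through a projected volume via `resourceFieldRef`. A minimal reproduction (pod/volume names and the 64Mi limit are hypothetical; `divisor: 1Mi` scales the reported value to mebibytes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo    # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
```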
• [SLOW TEST:6.455 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":2704,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:03.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1f39dcec-8cd6-4631-aa95-4a0f35cacfac
STEP: Creating a pod to test consume secrets
Sep 14 12:53:03.651: INFO: Waiting up to 5m0s for pod "pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c" in namespace "secrets-9633" to be "Succeeded or Failed"
Sep 14 12:53:03.654: INFO: Pod "pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.083166ms
Sep 14 12:53:05.658: INFO: Pod "pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007215119s
Sep 14 12:53:07.663: INFO: Pod "pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011521s
STEP: Saw pod success
Sep 14 12:53:07.663: INFO: Pod "pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c" satisfied condition "Succeeded or Failed"
Sep 14 12:53:07.666: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c container secret-volume-test:
STEP: delete the pod
Sep 14 12:53:07.698: INFO: Waiting for pod pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c to disappear
Sep 14 12:53:07.709: INFO: Pod pod-secrets-ff3f04c1-3e68-4619-b883-2b64c4c1588c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:53:07.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9633" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":2710,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:07.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-8svr
STEP: Creating a pod to test atomic-volume-subpath
Sep 14 12:53:07.801: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8svr" in namespace "subpath-6106" to be "Succeeded or Failed"
Sep 14 12:53:07.825: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Pending", Reason="", readiness=false. Elapsed: 23.627556ms
Sep 14 12:53:09.829: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028130195s
Sep 14 12:53:11.833: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 4.032339242s
Sep 14 12:53:13.837: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 6.035730984s
Sep 14 12:53:15.852: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 8.051173757s
Sep 14 12:53:17.855: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 10.054534622s
Sep 14 12:53:19.890: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 12.089415486s
Sep 14 12:53:21.894: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 14.093509121s
Sep 14 12:53:23.898: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 16.097234472s
Sep 14 12:53:25.902: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 18.100977304s
Sep 14 12:53:27.926: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 20.125434379s
Sep 14 12:53:29.956: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Running", Reason="", readiness=true. Elapsed: 22.155481187s
Sep 14 12:53:31.974: INFO: Pod "pod-subpath-test-projected-8svr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.173097545s
STEP: Saw pod success
Sep 14 12:53:31.974: INFO: Pod "pod-subpath-test-projected-8svr" satisfied condition "Succeeded or Failed"
Sep 14 12:53:31.977: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-8svr container test-container-subpath-projected-8svr:
STEP: delete the pod
Sep 14 12:53:32.007: INFO: Waiting for pod pod-subpath-test-projected-8svr to disappear
Sep 14 12:53:32.014: INFO: Pod pod-subpath-test-projected-8svr no longer exists
STEP: Deleting pod pod-subpath-test-projected-8svr
Sep 14 12:53:32.014: INFO: Deleting pod "pod-subpath-test-projected-8svr" in namespace "subpath-6106"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:53:32.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6106" for this suite.
• [SLOW TEST:24.311 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":186,"skipped":2715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:32.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:53:32.109: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:53:33.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6956" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":187,"skipped":2768,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:33.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 12:53:33.229: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:53:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7020" for this suite.
• [SLOW TEST:6.705 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":188,"skipped":2768,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:39.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 14 12:53:43.995: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:53:44.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5958" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":189,"skipped":2769,"failed":0}
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:53:44.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 14 12:53:54.178: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.178: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.219678 7 log.go:181] (0xc0039704d0) (0xc00395f2c0) Create stream I0914 12:53:54.219712 7 log.go:181] (0xc0039704d0) (0xc00395f2c0) Stream added, broadcasting: 1 I0914 12:53:54.221901 7 log.go:181] (0xc0039704d0) Reply frame received for 1 I0914 12:53:54.221944 7 log.go:181] (0xc0039704d0) (0xc003ddcbe0) Create stream I0914 12:53:54.221957 7 log.go:181] (0xc0039704d0) (0xc003ddcbe0) Stream added, broadcasting: 3 I0914 12:53:54.223089 7 log.go:181] (0xc0039704d0) Reply frame received for 3 I0914 12:53:54.223133 7 log.go:181] (0xc0039704d0) (0xc00395f360) Create stream I0914 12:53:54.223148 7 log.go:181] (0xc0039704d0) (0xc00395f360) Stream added, broadcasting: 5 I0914 12:53:54.224286 7 log.go:181] (0xc0039704d0) Reply frame received for 5 I0914 12:53:54.309429 7 log.go:181] (0xc0039704d0) Data frame received for 3 I0914 12:53:54.309457 7 log.go:181] (0xc003ddcbe0) (3) Data frame handling I0914 12:53:54.309468 7 log.go:181] (0xc003ddcbe0) (3) Data frame sent I0914 12:53:54.309473 7 log.go:181] (0xc0039704d0) Data frame received for 3 I0914 12:53:54.309478 7 log.go:181] (0xc003ddcbe0) (3) Data frame handling I0914 12:53:54.309550 7 log.go:181] (0xc0039704d0) Data frame received for 5 I0914 12:53:54.309580 7 log.go:181] (0xc00395f360) (5) Data frame handling I0914 12:53:54.310885 7 log.go:181] (0xc0039704d0) Data frame 
received for 1 I0914 12:53:54.310907 7 log.go:181] (0xc00395f2c0) (1) Data frame handling I0914 12:53:54.310919 7 log.go:181] (0xc00395f2c0) (1) Data frame sent I0914 12:53:54.310929 7 log.go:181] (0xc0039704d0) (0xc00395f2c0) Stream removed, broadcasting: 1 I0914 12:53:54.310951 7 log.go:181] (0xc0039704d0) Go away received I0914 12:53:54.311022 7 log.go:181] (0xc0039704d0) (0xc00395f2c0) Stream removed, broadcasting: 1 I0914 12:53:54.311040 7 log.go:181] (0xc0039704d0) (0xc003ddcbe0) Stream removed, broadcasting: 3 I0914 12:53:54.311050 7 log.go:181] (0xc0039704d0) (0xc00395f360) Stream removed, broadcasting: 5 Sep 14 12:53:54.311: INFO: Exec stderr: "" Sep 14 12:53:54.311: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.311: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.341657 7 log.go:181] (0xc003970bb0) (0xc00395f5e0) Create stream I0914 12:53:54.341698 7 log.go:181] (0xc003970bb0) (0xc00395f5e0) Stream added, broadcasting: 1 I0914 12:53:54.346056 7 log.go:181] (0xc003970bb0) Reply frame received for 1 I0914 12:53:54.346084 7 log.go:181] (0xc003970bb0) (0xc00237f180) Create stream I0914 12:53:54.346091 7 log.go:181] (0xc003970bb0) (0xc00237f180) Stream added, broadcasting: 3 I0914 12:53:54.347021 7 log.go:181] (0xc003970bb0) Reply frame received for 3 I0914 12:53:54.347049 7 log.go:181] (0xc003970bb0) (0xc00401adc0) Create stream I0914 12:53:54.347062 7 log.go:181] (0xc003970bb0) (0xc00401adc0) Stream added, broadcasting: 5 I0914 12:53:54.347738 7 log.go:181] (0xc003970bb0) Reply frame received for 5 I0914 12:53:54.413305 7 log.go:181] (0xc003970bb0) Data frame received for 5 I0914 12:53:54.413340 7 log.go:181] (0xc00401adc0) (5) Data frame handling I0914 12:53:54.413361 7 log.go:181] (0xc003970bb0) Data frame received for 3 I0914 12:53:54.413371 7 log.go:181] (0xc00237f180) (3) 
Data frame handling I0914 12:53:54.413386 7 log.go:181] (0xc00237f180) (3) Data frame sent I0914 12:53:54.413403 7 log.go:181] (0xc003970bb0) Data frame received for 3 I0914 12:53:54.413415 7 log.go:181] (0xc00237f180) (3) Data frame handling I0914 12:53:54.414518 7 log.go:181] (0xc003970bb0) Data frame received for 1 I0914 12:53:54.414548 7 log.go:181] (0xc00395f5e0) (1) Data frame handling I0914 12:53:54.414563 7 log.go:181] (0xc00395f5e0) (1) Data frame sent I0914 12:53:54.414600 7 log.go:181] (0xc003970bb0) (0xc00395f5e0) Stream removed, broadcasting: 1 I0914 12:53:54.414615 7 log.go:181] (0xc003970bb0) Go away received I0914 12:53:54.414737 7 log.go:181] (0xc003970bb0) (0xc00395f5e0) Stream removed, broadcasting: 1 I0914 12:53:54.414766 7 log.go:181] (0xc003970bb0) (0xc00237f180) Stream removed, broadcasting: 3 I0914 12:53:54.414790 7 log.go:181] (0xc003970bb0) (0xc00401adc0) Stream removed, broadcasting: 5 Sep 14 12:53:54.414: INFO: Exec stderr: "" Sep 14 12:53:54.414: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.414: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.448397 7 log.go:181] (0xc003971290) (0xc00395f860) Create stream I0914 12:53:54.448431 7 log.go:181] (0xc003971290) (0xc00395f860) Stream added, broadcasting: 1 I0914 12:53:54.450148 7 log.go:181] (0xc003971290) Reply frame received for 1 I0914 12:53:54.450194 7 log.go:181] (0xc003971290) (0xc003e61e00) Create stream I0914 12:53:54.450212 7 log.go:181] (0xc003971290) (0xc003e61e00) Stream added, broadcasting: 3 I0914 12:53:54.451057 7 log.go:181] (0xc003971290) Reply frame received for 3 I0914 12:53:54.451096 7 log.go:181] (0xc003971290) (0xc003e61ea0) Create stream I0914 12:53:54.451124 7 log.go:181] (0xc003971290) (0xc003e61ea0) Stream added, broadcasting: 5 I0914 12:53:54.451915 7 log.go:181] (0xc003971290) Reply frame 
received for 5 I0914 12:53:54.513087 7 log.go:181] (0xc003971290) Data frame received for 5 I0914 12:53:54.513146 7 log.go:181] (0xc003e61ea0) (5) Data frame handling I0914 12:53:54.513183 7 log.go:181] (0xc003971290) Data frame received for 3 I0914 12:53:54.513203 7 log.go:181] (0xc003e61e00) (3) Data frame handling I0914 12:53:54.513262 7 log.go:181] (0xc003e61e00) (3) Data frame sent I0914 12:53:54.513287 7 log.go:181] (0xc003971290) Data frame received for 3 I0914 12:53:54.513307 7 log.go:181] (0xc003e61e00) (3) Data frame handling I0914 12:53:54.514802 7 log.go:181] (0xc003971290) Data frame received for 1 I0914 12:53:54.514837 7 log.go:181] (0xc00395f860) (1) Data frame handling I0914 12:53:54.514865 7 log.go:181] (0xc00395f860) (1) Data frame sent I0914 12:53:54.514895 7 log.go:181] (0xc003971290) (0xc00395f860) Stream removed, broadcasting: 1 I0914 12:53:54.514920 7 log.go:181] (0xc003971290) Go away received I0914 12:53:54.515029 7 log.go:181] (0xc003971290) (0xc00395f860) Stream removed, broadcasting: 1 I0914 12:53:54.515055 7 log.go:181] (0xc003971290) (0xc003e61e00) Stream removed, broadcasting: 3 I0914 12:53:54.515075 7 log.go:181] (0xc003971290) (0xc003e61ea0) Stream removed, broadcasting: 5 Sep 14 12:53:54.515: INFO: Exec stderr: "" Sep 14 12:53:54.515: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.515: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.547946 7 log.go:181] (0xc003971970) (0xc00395fae0) Create stream I0914 12:53:54.547969 7 log.go:181] (0xc003971970) (0xc00395fae0) Stream added, broadcasting: 1 I0914 12:53:54.549783 7 log.go:181] (0xc003971970) Reply frame received for 1 I0914 12:53:54.549825 7 log.go:181] (0xc003971970) (0xc00237f220) Create stream I0914 12:53:54.549842 7 log.go:181] (0xc003971970) (0xc00237f220) Stream added, broadcasting: 3 I0914 
12:53:54.550639 7 log.go:181] (0xc003971970) Reply frame received for 3 I0914 12:53:54.550662 7 log.go:181] (0xc003971970) (0xc00395fb80) Create stream I0914 12:53:54.550668 7 log.go:181] (0xc003971970) (0xc00395fb80) Stream added, broadcasting: 5 I0914 12:53:54.551389 7 log.go:181] (0xc003971970) Reply frame received for 5 I0914 12:53:54.612734 7 log.go:181] (0xc003971970) Data frame received for 5 I0914 12:53:54.612771 7 log.go:181] (0xc00395fb80) (5) Data frame handling I0914 12:53:54.612803 7 log.go:181] (0xc003971970) Data frame received for 3 I0914 12:53:54.612819 7 log.go:181] (0xc00237f220) (3) Data frame handling I0914 12:53:54.612839 7 log.go:181] (0xc00237f220) (3) Data frame sent I0914 12:53:54.612850 7 log.go:181] (0xc003971970) Data frame received for 3 I0914 12:53:54.612860 7 log.go:181] (0xc00237f220) (3) Data frame handling I0914 12:53:54.614034 7 log.go:181] (0xc003971970) Data frame received for 1 I0914 12:53:54.614100 7 log.go:181] (0xc00395fae0) (1) Data frame handling I0914 12:53:54.614136 7 log.go:181] (0xc00395fae0) (1) Data frame sent I0914 12:53:54.614163 7 log.go:181] (0xc003971970) (0xc00395fae0) Stream removed, broadcasting: 1 I0914 12:53:54.614184 7 log.go:181] (0xc003971970) Go away received I0914 12:53:54.614314 7 log.go:181] (0xc003971970) (0xc00395fae0) Stream removed, broadcasting: 1 I0914 12:53:54.614348 7 log.go:181] (0xc003971970) (0xc00237f220) Stream removed, broadcasting: 3 I0914 12:53:54.614380 7 log.go:181] (0xc003971970) (0xc00395fb80) Stream removed, broadcasting: 5 Sep 14 12:53:54.614: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 14 12:53:54.614: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.614: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.647995 7 
log.go:181] (0xc00433a0b0) (0xc00395fe00) Create stream I0914 12:53:54.648028 7 log.go:181] (0xc00433a0b0) (0xc00395fe00) Stream added, broadcasting: 1 I0914 12:53:54.650803 7 log.go:181] (0xc00433a0b0) Reply frame received for 1 I0914 12:53:54.650851 7 log.go:181] (0xc00433a0b0) (0xc00237f2c0) Create stream I0914 12:53:54.650872 7 log.go:181] (0xc00433a0b0) (0xc00237f2c0) Stream added, broadcasting: 3 I0914 12:53:54.651654 7 log.go:181] (0xc00433a0b0) Reply frame received for 3 I0914 12:53:54.651696 7 log.go:181] (0xc00433a0b0) (0xc00401ae60) Create stream I0914 12:53:54.651713 7 log.go:181] (0xc00433a0b0) (0xc00401ae60) Stream added, broadcasting: 5 I0914 12:53:54.652846 7 log.go:181] (0xc00433a0b0) Reply frame received for 5 I0914 12:53:54.718208 7 log.go:181] (0xc00433a0b0) Data frame received for 3 I0914 12:53:54.718244 7 log.go:181] (0xc00237f2c0) (3) Data frame handling I0914 12:53:54.718254 7 log.go:181] (0xc00237f2c0) (3) Data frame sent I0914 12:53:54.718260 7 log.go:181] (0xc00433a0b0) Data frame received for 3 I0914 12:53:54.718268 7 log.go:181] (0xc00237f2c0) (3) Data frame handling I0914 12:53:54.718341 7 log.go:181] (0xc00433a0b0) Data frame received for 5 I0914 12:53:54.718372 7 log.go:181] (0xc00401ae60) (5) Data frame handling I0914 12:53:54.719454 7 log.go:181] (0xc00433a0b0) Data frame received for 1 I0914 12:53:54.719476 7 log.go:181] (0xc00395fe00) (1) Data frame handling I0914 12:53:54.719491 7 log.go:181] (0xc00395fe00) (1) Data frame sent I0914 12:53:54.719505 7 log.go:181] (0xc00433a0b0) (0xc00395fe00) Stream removed, broadcasting: 1 I0914 12:53:54.719521 7 log.go:181] (0xc00433a0b0) Go away received I0914 12:53:54.719651 7 log.go:181] (0xc00433a0b0) (0xc00395fe00) Stream removed, broadcasting: 1 I0914 12:53:54.719674 7 log.go:181] (0xc00433a0b0) (0xc00237f2c0) Stream removed, broadcasting: 3 I0914 12:53:54.719690 7 log.go:181] (0xc00433a0b0) (0xc00401ae60) Stream removed, broadcasting: 5 Sep 14 12:53:54.719: INFO: Exec stderr: "" Sep 14 
12:53:54.719: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.719: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.747320 7 log.go:181] (0xc00433a790) (0xc0043c00a0) Create stream I0914 12:53:54.747341 7 log.go:181] (0xc00433a790) (0xc0043c00a0) Stream added, broadcasting: 1 I0914 12:53:54.750028 7 log.go:181] (0xc00433a790) Reply frame received for 1 I0914 12:53:54.750083 7 log.go:181] (0xc00433a790) (0xc00237f360) Create stream I0914 12:53:54.750102 7 log.go:181] (0xc00433a790) (0xc00237f360) Stream added, broadcasting: 3 I0914 12:53:54.751083 7 log.go:181] (0xc00433a790) Reply frame received for 3 I0914 12:53:54.751118 7 log.go:181] (0xc00433a790) (0xc003ddcc80) Create stream I0914 12:53:54.751132 7 log.go:181] (0xc00433a790) (0xc003ddcc80) Stream added, broadcasting: 5 I0914 12:53:54.752116 7 log.go:181] (0xc00433a790) Reply frame received for 5 I0914 12:53:54.817427 7 log.go:181] (0xc00433a790) Data frame received for 5 I0914 12:53:54.817452 7 log.go:181] (0xc003ddcc80) (5) Data frame handling I0914 12:53:54.817477 7 log.go:181] (0xc00433a790) Data frame received for 3 I0914 12:53:54.817523 7 log.go:181] (0xc00237f360) (3) Data frame handling I0914 12:53:54.817580 7 log.go:181] (0xc00237f360) (3) Data frame sent I0914 12:53:54.818437 7 log.go:181] (0xc00433a790) Data frame received for 3 I0914 12:53:54.818511 7 log.go:181] (0xc00237f360) (3) Data frame handling I0914 12:53:54.820942 7 log.go:181] (0xc00433a790) Data frame received for 1 I0914 12:53:54.820973 7 log.go:181] (0xc0043c00a0) (1) Data frame handling I0914 12:53:54.820998 7 log.go:181] (0xc0043c00a0) (1) Data frame sent I0914 12:53:54.821025 7 log.go:181] (0xc00433a790) (0xc0043c00a0) Stream removed, broadcasting: 1 I0914 12:53:54.821107 7 log.go:181] (0xc00433a790) Go away received I0914 12:53:54.821147 7 log.go:181] 
(0xc00433a790) (0xc0043c00a0) Stream removed, broadcasting: 1 I0914 12:53:54.821169 7 log.go:181] (0xc00433a790) (0xc00237f360) Stream removed, broadcasting: 3 I0914 12:53:54.821205 7 log.go:181] (0xc00433a790) (0xc003ddcc80) Stream removed, broadcasting: 5 Sep 14 12:53:54.821: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 14 12:53:54.821: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.821: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.853501 7 log.go:181] (0xc0040582c0) (0xc00401b0e0) Create stream I0914 12:53:54.853524 7 log.go:181] (0xc0040582c0) (0xc00401b0e0) Stream added, broadcasting: 1 I0914 12:53:54.855596 7 log.go:181] (0xc0040582c0) Reply frame received for 1 I0914 12:53:54.855634 7 log.go:181] (0xc0040582c0) (0xc003e61f40) Create stream I0914 12:53:54.855653 7 log.go:181] (0xc0040582c0) (0xc003e61f40) Stream added, broadcasting: 3 I0914 12:53:54.856610 7 log.go:181] (0xc0040582c0) Reply frame received for 3 I0914 12:53:54.856647 7 log.go:181] (0xc0040582c0) (0xc003ddcdc0) Create stream I0914 12:53:54.856661 7 log.go:181] (0xc0040582c0) (0xc003ddcdc0) Stream added, broadcasting: 5 I0914 12:53:54.857554 7 log.go:181] (0xc0040582c0) Reply frame received for 5 I0914 12:53:54.925043 7 log.go:181] (0xc0040582c0) Data frame received for 5 I0914 12:53:54.925157 7 log.go:181] (0xc003ddcdc0) (5) Data frame handling I0914 12:53:54.925201 7 log.go:181] (0xc0040582c0) Data frame received for 3 I0914 12:53:54.925218 7 log.go:181] (0xc003e61f40) (3) Data frame handling I0914 12:53:54.925245 7 log.go:181] (0xc003e61f40) (3) Data frame sent I0914 12:53:54.925259 7 log.go:181] (0xc0040582c0) Data frame received for 3 I0914 12:53:54.925272 7 log.go:181] (0xc003e61f40) (3) Data frame handling I0914 
12:53:54.926935 7 log.go:181] (0xc0040582c0) Data frame received for 1 I0914 12:53:54.926963 7 log.go:181] (0xc00401b0e0) (1) Data frame handling I0914 12:53:54.926989 7 log.go:181] (0xc00401b0e0) (1) Data frame sent I0914 12:53:54.927016 7 log.go:181] (0xc0040582c0) (0xc00401b0e0) Stream removed, broadcasting: 1 I0914 12:53:54.927115 7 log.go:181] (0xc0040582c0) (0xc00401b0e0) Stream removed, broadcasting: 1 I0914 12:53:54.927133 7 log.go:181] (0xc0040582c0) (0xc003e61f40) Stream removed, broadcasting: 3 I0914 12:53:54.927276 7 log.go:181] (0xc0040582c0) Go away received I0914 12:53:54.927481 7 log.go:181] (0xc0040582c0) (0xc003ddcdc0) Stream removed, broadcasting: 5 Sep 14 12:53:54.927: INFO: Exec stderr: "" Sep 14 12:53:54.927: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:54.927: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:54.956350 7 log.go:181] (0xc00448e210) (0xc003ddd4a0) Create stream I0914 12:53:54.956369 7 log.go:181] (0xc00448e210) (0xc003ddd4a0) Stream added, broadcasting: 1 I0914 12:53:54.958467 7 log.go:181] (0xc00448e210) Reply frame received for 1 I0914 12:53:54.958506 7 log.go:181] (0xc00448e210) (0xc003ddd540) Create stream I0914 12:53:54.958520 7 log.go:181] (0xc00448e210) (0xc003ddd540) Stream added, broadcasting: 3 I0914 12:53:54.959415 7 log.go:181] (0xc00448e210) Reply frame received for 3 I0914 12:53:54.959439 7 log.go:181] (0xc00448e210) (0xc00401b180) Create stream I0914 12:53:54.959447 7 log.go:181] (0xc00448e210) (0xc00401b180) Stream added, broadcasting: 5 I0914 12:53:54.960506 7 log.go:181] (0xc00448e210) Reply frame received for 5 I0914 12:53:55.021648 7 log.go:181] (0xc00448e210) Data frame received for 3 I0914 12:53:55.021686 7 log.go:181] (0xc003ddd540) (3) Data frame handling I0914 12:53:55.021708 7 log.go:181] (0xc00448e210) Data frame 
received for 5 I0914 12:53:55.021754 7 log.go:181] (0xc00401b180) (5) Data frame handling I0914 12:53:55.021787 7 log.go:181] (0xc003ddd540) (3) Data frame sent I0914 12:53:55.021875 7 log.go:181] (0xc00448e210) Data frame received for 3 I0914 12:53:55.021897 7 log.go:181] (0xc003ddd540) (3) Data frame handling I0914 12:53:55.023398 7 log.go:181] (0xc00448e210) Data frame received for 1 I0914 12:53:55.023433 7 log.go:181] (0xc003ddd4a0) (1) Data frame handling I0914 12:53:55.023462 7 log.go:181] (0xc003ddd4a0) (1) Data frame sent I0914 12:53:55.023489 7 log.go:181] (0xc00448e210) (0xc003ddd4a0) Stream removed, broadcasting: 1 I0914 12:53:55.023599 7 log.go:181] (0xc00448e210) (0xc003ddd4a0) Stream removed, broadcasting: 1 I0914 12:53:55.023638 7 log.go:181] (0xc00448e210) (0xc003ddd540) Stream removed, broadcasting: 3 I0914 12:53:55.023656 7 log.go:181] (0xc00448e210) (0xc00401b180) Stream removed, broadcasting: 5 Sep 14 12:53:55.023: INFO: Exec stderr: "" I0914 12:53:55.023725 7 log.go:181] (0xc00448e210) Go away received Sep 14 12:53:55.023: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:55.023: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:55.062126 7 log.go:181] (0xc000954b00) (0xc00237f900) Create stream I0914 12:53:55.062154 7 log.go:181] (0xc000954b00) (0xc00237f900) Stream added, broadcasting: 1 I0914 12:53:55.064213 7 log.go:181] (0xc000954b00) Reply frame received for 1 I0914 12:53:55.064265 7 log.go:181] (0xc000954b00) (0xc00458c000) Create stream I0914 12:53:55.064285 7 log.go:181] (0xc000954b00) (0xc00458c000) Stream added, broadcasting: 3 I0914 12:53:55.065312 7 log.go:181] (0xc000954b00) Reply frame received for 3 I0914 12:53:55.065346 7 log.go:181] (0xc000954b00) (0xc00237f9a0) Create stream I0914 12:53:55.065365 7 log.go:181] (0xc000954b00) (0xc00237f9a0) Stream added, 
broadcasting: 5 I0914 12:53:55.066434 7 log.go:181] (0xc000954b00) Reply frame received for 5 I0914 12:53:55.134058 7 log.go:181] (0xc000954b00) Data frame received for 5 I0914 12:53:55.134106 7 log.go:181] (0xc000954b00) Data frame received for 3 I0914 12:53:55.134157 7 log.go:181] (0xc00458c000) (3) Data frame handling I0914 12:53:55.134187 7 log.go:181] (0xc00458c000) (3) Data frame sent I0914 12:53:55.134213 7 log.go:181] (0xc000954b00) Data frame received for 3 I0914 12:53:55.134242 7 log.go:181] (0xc00237f9a0) (5) Data frame handling I0914 12:53:55.134309 7 log.go:181] (0xc00458c000) (3) Data frame handling I0914 12:53:55.135695 7 log.go:181] (0xc000954b00) Data frame received for 1 I0914 12:53:55.135731 7 log.go:181] (0xc00237f900) (1) Data frame handling I0914 12:53:55.135756 7 log.go:181] (0xc00237f900) (1) Data frame sent I0914 12:53:55.135777 7 log.go:181] (0xc000954b00) (0xc00237f900) Stream removed, broadcasting: 1 I0914 12:53:55.135899 7 log.go:181] (0xc000954b00) (0xc00237f900) Stream removed, broadcasting: 1 I0914 12:53:55.135935 7 log.go:181] (0xc000954b00) (0xc00458c000) Stream removed, broadcasting: 3 I0914 12:53:55.135951 7 log.go:181] (0xc000954b00) (0xc00237f9a0) Stream removed, broadcasting: 5 Sep 14 12:53:55.135: INFO: Exec stderr: "" Sep 14 12:53:55.135: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9406 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:53:55.136: INFO: >>> kubeConfig: /root/.kube/config I0914 12:53:55.136019 7 log.go:181] (0xc000954b00) Go away received I0914 12:53:55.170366 7 log.go:181] (0xc00433ae70) (0xc0043c0320) Create stream I0914 12:53:55.170392 7 log.go:181] (0xc00433ae70) (0xc0043c0320) Stream added, broadcasting: 1 I0914 12:53:55.172322 7 log.go:181] (0xc00433ae70) Reply frame received for 1 I0914 12:53:55.172363 7 log.go:181] (0xc00433ae70) (0xc00237fa40) Create stream I0914 
12:53:55.172371 7 log.go:181] (0xc00433ae70) (0xc00237fa40) Stream added, broadcasting: 3 I0914 12:53:55.173196 7 log.go:181] (0xc00433ae70) Reply frame received for 3 I0914 12:53:55.173248 7 log.go:181] (0xc00433ae70) (0xc00458c0a0) Create stream I0914 12:53:55.173265 7 log.go:181] (0xc00433ae70) (0xc00458c0a0) Stream added, broadcasting: 5 I0914 12:53:55.174163 7 log.go:181] (0xc00433ae70) Reply frame received for 5 I0914 12:53:55.241932 7 log.go:181] (0xc00433ae70) Data frame received for 5 I0914 12:53:55.242007 7 log.go:181] (0xc00458c0a0) (5) Data frame handling I0914 12:53:55.242055 7 log.go:181] (0xc00433ae70) Data frame received for 3 I0914 12:53:55.242088 7 log.go:181] (0xc00237fa40) (3) Data frame handling I0914 12:53:55.242133 7 log.go:181] (0xc00237fa40) (3) Data frame sent I0914 12:53:55.242174 7 log.go:181] (0xc00433ae70) Data frame received for 3 I0914 12:53:55.242194 7 log.go:181] (0xc00237fa40) (3) Data frame handling I0914 12:53:55.242856 7 log.go:181] (0xc00433ae70) Data frame received for 1 I0914 12:53:55.242877 7 log.go:181] (0xc0043c0320) (1) Data frame handling I0914 12:53:55.242891 7 log.go:181] (0xc0043c0320) (1) Data frame sent I0914 12:53:55.242980 7 log.go:181] (0xc00433ae70) (0xc0043c0320) Stream removed, broadcasting: 1 I0914 12:53:55.243019 7 log.go:181] (0xc00433ae70) Go away received I0914 12:53:55.243112 7 log.go:181] (0xc00433ae70) (0xc0043c0320) Stream removed, broadcasting: 1 I0914 12:53:55.243152 7 log.go:181] (0xc00433ae70) (0xc00237fa40) Stream removed, broadcasting: 3 I0914 12:53:55.243175 7 log.go:181] (0xc00433ae70) (0xc00458c0a0) Stream removed, broadcasting: 5 Sep 14 12:53:55.243: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:53:55.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-kubelet-etc-hosts-9406" for this suite. • [SLOW TEST:11.211 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2774,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:53:55.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:53:55.369: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7173" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":191,"skipped":2779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:53:55.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-583 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 14 12:53:55.464: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 14 12:53:55.564: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 14 12:53:57.568: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 
14 12:53:59.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:01.573: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:03.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:05.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:07.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:09.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:11.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 12:54:13.568: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 14 12:54:13.574: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 14 12:54:15.579: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 14 12:54:19.603: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:8080/dial?request=hostname&protocol=http&host=10.244.1.220&port=8080&tries=1'] Namespace:pod-network-test-583 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:54:19.603: INFO: >>> kubeConfig: /root/.kube/config I0914 12:54:19.637568 7 log.go:181] (0xc000954580) (0xc003245680) Create stream I0914 12:54:19.637596 7 log.go:181] (0xc000954580) (0xc003245680) Stream added, broadcasting: 1 I0914 12:54:19.639271 7 log.go:181] (0xc000954580) Reply frame received for 1 I0914 12:54:19.639295 7 log.go:181] (0xc000954580) (0xc00482f540) Create stream I0914 12:54:19.639303 7 log.go:181] (0xc000954580) (0xc00482f540) Stream added, broadcasting: 3 I0914 12:54:19.640116 7 log.go:181] (0xc000954580) Reply frame received for 3 I0914 12:54:19.640227 7 log.go:181] (0xc000954580) (0xc00472b680) Create stream I0914 12:54:19.640245 7 log.go:181] (0xc000954580) (0xc00472b680) Stream added, broadcasting: 5 I0914 12:54:19.641075 7 log.go:181] (0xc000954580) Reply frame 
received for 5 I0914 12:54:19.736730 7 log.go:181] (0xc000954580) Data frame received for 3 I0914 12:54:19.736760 7 log.go:181] (0xc00482f540) (3) Data frame handling I0914 12:54:19.736782 7 log.go:181] (0xc00482f540) (3) Data frame sent I0914 12:54:19.737533 7 log.go:181] (0xc000954580) Data frame received for 3 I0914 12:54:19.737563 7 log.go:181] (0xc00482f540) (3) Data frame handling I0914 12:54:19.737588 7 log.go:181] (0xc000954580) Data frame received for 5 I0914 12:54:19.737602 7 log.go:181] (0xc00472b680) (5) Data frame handling I0914 12:54:19.739516 7 log.go:181] (0xc000954580) Data frame received for 1 I0914 12:54:19.739536 7 log.go:181] (0xc003245680) (1) Data frame handling I0914 12:54:19.739549 7 log.go:181] (0xc003245680) (1) Data frame sent I0914 12:54:19.739562 7 log.go:181] (0xc000954580) (0xc003245680) Stream removed, broadcasting: 1 I0914 12:54:19.739636 7 log.go:181] (0xc000954580) Go away received I0914 12:54:19.739676 7 log.go:181] (0xc000954580) (0xc003245680) Stream removed, broadcasting: 1 I0914 12:54:19.739695 7 log.go:181] (0xc000954580) (0xc00482f540) Stream removed, broadcasting: 3 I0914 12:54:19.739707 7 log.go:181] (0xc000954580) (0xc00472b680) Stream removed, broadcasting: 5 Sep 14 12:54:19.739: INFO: Waiting for responses: map[] Sep 14 12:54:19.743: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.221:8080/dial?request=hostname&protocol=http&host=10.244.2.213&port=8080&tries=1'] Namespace:pod-network-test-583 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 12:54:19.743: INFO: >>> kubeConfig: /root/.kube/config I0914 12:54:19.775880 7 log.go:181] (0xc00433a4d0) (0xc00482fa40) Create stream I0914 12:54:19.775903 7 log.go:181] (0xc00433a4d0) (0xc00482fa40) Stream added, broadcasting: 1 I0914 12:54:19.777668 7 log.go:181] (0xc00433a4d0) Reply frame received for 1 I0914 12:54:19.777695 7 log.go:181] (0xc00433a4d0) (0xc00482fae0) 
Create stream I0914 12:54:19.777712 7 log.go:181] (0xc00433a4d0) (0xc00482fae0) Stream added, broadcasting: 3 I0914 12:54:19.778561 7 log.go:181] (0xc00433a4d0) Reply frame received for 3 I0914 12:54:19.778602 7 log.go:181] (0xc00433a4d0) (0xc0040b5680) Create stream I0914 12:54:19.778616 7 log.go:181] (0xc00433a4d0) (0xc0040b5680) Stream added, broadcasting: 5 I0914 12:54:19.779831 7 log.go:181] (0xc00433a4d0) Reply frame received for 5 I0914 12:54:19.859303 7 log.go:181] (0xc00433a4d0) Data frame received for 3 I0914 12:54:19.859333 7 log.go:181] (0xc00482fae0) (3) Data frame handling I0914 12:54:19.859363 7 log.go:181] (0xc00482fae0) (3) Data frame sent I0914 12:54:19.859992 7 log.go:181] (0xc00433a4d0) Data frame received for 3 I0914 12:54:19.860025 7 log.go:181] (0xc00482fae0) (3) Data frame handling I0914 12:54:19.860079 7 log.go:181] (0xc00433a4d0) Data frame received for 5 I0914 12:54:19.860093 7 log.go:181] (0xc0040b5680) (5) Data frame handling I0914 12:54:19.862181 7 log.go:181] (0xc00433a4d0) Data frame received for 1 I0914 12:54:19.862201 7 log.go:181] (0xc00482fa40) (1) Data frame handling I0914 12:54:19.862222 7 log.go:181] (0xc00482fa40) (1) Data frame sent I0914 12:54:19.862238 7 log.go:181] (0xc00433a4d0) (0xc00482fa40) Stream removed, broadcasting: 1 I0914 12:54:19.862254 7 log.go:181] (0xc00433a4d0) Go away received I0914 12:54:19.862401 7 log.go:181] (0xc00433a4d0) (0xc00482fa40) Stream removed, broadcasting: 1 I0914 12:54:19.862424 7 log.go:181] (0xc00433a4d0) (0xc00482fae0) Stream removed, broadcasting: 3 I0914 12:54:19.862446 7 log.go:181] (0xc00433a4d0) (0xc0040b5680) Stream removed, broadcasting: 5 Sep 14 12:54:19.862: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:54:19.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pod-network-test-583" for this suite. • [SLOW TEST:24.492 seconds] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":2898,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:54:19.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 14 12:54:19.924: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 14 12:54:19.937: INFO: Waiting 
for terminating namespaces to be deleted... Sep 14 12:54:19.940: INFO: Logging pods the apiserver thinks is on node latest-worker before test Sep 14 12:54:19.946: INFO: coredns-f9fd979d6-rckh5 from kube-system started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container coredns ready: true, restart count 0 Sep 14 12:54:19.946: INFO: coredns-f9fd979d6-rtr7c from kube-system started at 2020-09-13 17:00:07 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container coredns ready: true, restart count 0 Sep 14 12:54:19.946: INFO: kindnet-x9kfh from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:54:19.946: INFO: kube-proxy-484ff from kube-system started at 2020-09-13 16:59:36 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:54:19.946: INFO: local-path-provisioner-78776bfc44-ks8gr from local-path-storage started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 14 12:54:19.946: INFO: netserver-0 from pod-network-test-583 started at 2020-09-14 12:53:55 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container webserver ready: true, restart count 0 Sep 14 12:54:19.946: INFO: test-container-pod from pod-network-test-583 started at 2020-09-14 12:54:15 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.946: INFO: Container webserver ready: true, restart count 0 Sep 14 12:54:19.946: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Sep 14 12:54:19.952: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-9406 started at 2020-09-14 12:53:50 +0000 UTC (2 container statuses recorded) Sep 14 12:54:19.952: INFO: Container busybox-1 ready: true, 
restart count 0 Sep 14 12:54:19.952: INFO: Container busybox-2 ready: true, restart count 0 Sep 14 12:54:19.952: INFO: test-pod from e2e-kubelet-etc-hosts-9406 started at 2020-09-14 12:53:44 +0000 UTC (3 container statuses recorded) Sep 14 12:54:19.952: INFO: Container busybox-1 ready: true, restart count 0 Sep 14 12:54:19.952: INFO: Container busybox-2 ready: true, restart count 0 Sep 14 12:54:19.952: INFO: Container busybox-3 ready: true, restart count 0 Sep 14 12:54:19.952: INFO: kindnet-6mthj from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.952: INFO: Container kindnet-cni ready: true, restart count 0 Sep 14 12:54:19.952: INFO: kube-proxy-thrnr from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.952: INFO: Container kube-proxy ready: true, restart count 0 Sep 14 12:54:19.952: INFO: netserver-1 from pod-network-test-583 started at 2020-09-14 12:53:55 +0000 UTC (1 container statuses recorded) Sep 14 12:54:19.952: INFO: Container webserver ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b5c0625f-50c7-42c1-9620-783d6e373f27 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-b5c0625f-50c7-42c1-9620-783d6e373f27 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b5c0625f-50c7-42c1-9620-783d6e373f27 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:54:40.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8944" for this suite. 
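The three-pod sequence above (same hostPort 54321; differing hostIP or protocol, so no scheduling conflict) can be sketched as a manifest. This is an illustration of the predicate being exercised, not the exact spec the e2e framework submits; the pod name, container name, and image are hypothetical.

```yaml
# Sketch of pod1 from the hostPort-conflict test: hostPort 54321 bound to
# hostIP 127.0.0.1 over TCP. pod2 reuses port 54321 with hostIP 127.0.0.2,
# and pod3 reuses 54321/127.0.0.2 but with protocol UDP; none of the three
# (hostIP, hostPort, protocol) tuples collide, so all schedule onto the node.
apiVersion: v1
kind: Pod
metadata:
  name: pod1                      # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-b5c0625f-50c7-42c1-9620-783d6e373f27: "90"
  containers:
  - name: agnhost                 # hypothetical container/image
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
```

The scheduler treats a host port as occupied only for an exact (hostIP, hostPort, protocol) match, which is why varying just the hostIP (pod2) or just the protocol (pod3) avoids a conflict.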
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.345 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":193,"skipped":2907,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:54:40.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 14 
12:54:40.326: INFO: Waiting up to 5m0s for pod "pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6" in namespace "emptydir-2728" to be "Succeeded or Failed" Sep 14 12:54:40.329: INFO: Pod "pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483038ms Sep 14 12:54:42.332: INFO: Pod "pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005854569s Sep 14 12:54:44.340: INFO: Pod "pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013872965s STEP: Saw pod success Sep 14 12:54:44.340: INFO: Pod "pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6" satisfied condition "Succeeded or Failed" Sep 14 12:54:44.343: INFO: Trying to get logs from node latest-worker2 pod pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6 container test-container: STEP: delete the pod Sep 14 12:54:44.371: INFO: Waiting for pod pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6 to disappear Sep 14 12:54:44.382: INFO: Pod pod-9ba0b3a4-5d14-4682-8f7f-21244607d7e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:54:44.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2728" for this suite. 
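The (root,0644,default) emptyDir case above creates a short-lived pod, waits for it to reach "Succeeded or Failed", and asserts the file mode from the container logs. A minimal stand-in for that pod, assuming a generic busybox image rather than the framework's own mount-test image:

```yaml
# Illustrative equivalent of the emptyDir 0644 test pod: write a file into an
# emptyDir with the default medium (node disk) and print its mode as root.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644        # hypothetical name
spec:
  restartPolicy: Never           # pod runs once, so phase reaches Succeeded
  containers:
  - name: test-container
    image: busybox               # the real test uses an e2e mount-test image
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium; no medium: Memory
```

Once the pod's phase is Succeeded, `kubectl logs pod-emptydir-0644` would show the observed mode, mirroring the "Trying to get logs ... container test-container" step in the log.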
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":2919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:54:44.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 in namespace container-probe-7725 Sep 14 12:54:50.764: INFO: Started pod liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 in namespace container-probe-7725 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 12:54:50.766: INFO: Initial restart count of pod liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is 0 Sep 14 12:55:07.851: INFO: Restart count of pod container-probe-7725/liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is now 1 (17.084497187s elapsed) Sep 14 12:55:25.894: INFO: Restart count of pod 
container-probe-7725/liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is now 2 (35.127143946s elapsed) Sep 14 12:55:45.942: INFO: Restart count of pod container-probe-7725/liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is now 3 (55.175370099s elapsed) Sep 14 12:56:05.994: INFO: Restart count of pod container-probe-7725/liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is now 4 (1m15.227319882s elapsed) Sep 14 12:57:10.259: INFO: Restart count of pod container-probe-7725/liveness-3a1bc27e-67ae-4465-bd2b-92446fe0df32 is now 5 (2m19.492869854s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:57:10.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7725" for this suite. • [SLOW TEST:145.892 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":2943,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:57:10.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 14 12:57:15.286: INFO: Successfully updated pod "labelsupdatededcfb6e-8ecb-4a0b-bf9d-e9d07c03c758" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:57:17.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2537" for this suite. 
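The "update labels on modification" case above patches a running pod's labels and waits for the kubelet to refresh the projected file, which is what the "Successfully updated pod" line records. A sketch of such a pod, with hypothetical names and a generic image, assuming the standard downward API volume projection:

```yaml
# Pod whose labels are projected into a file via a downward API volume.
# After the test patches metadata.labels, the kubelet rewrites
# /etc/podinfo/labels without restarting the container.
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example     # hypothetical name
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

Downward API volume files track label and annotation changes on the live object (environment variables do not), so the test only needs to patch the pod and re-read the file.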
• [SLOW TEST:7.028 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":2967,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:57:17.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 14 12:57:17.389: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 14 12:57:17.407: INFO: Waiting for terminating namespaces to be deleted... 
Sep 14 12:57:17.409: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Sep 14 12:57:17.415: INFO: coredns-f9fd979d6-rckh5 from kube-system started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.415: INFO: Container coredns ready: true, restart count 0
Sep 14 12:57:17.415: INFO: coredns-f9fd979d6-rtr7c from kube-system started at 2020-09-13 17:00:07 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.415: INFO: Container coredns ready: true, restart count 0
Sep 14 12:57:17.415: INFO: kindnet-x9kfh from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.415: INFO: Container kindnet-cni ready: true, restart count 0
Sep 14 12:57:17.415: INFO: kube-proxy-484ff from kube-system started at 2020-09-13 16:59:36 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.415: INFO: Container kube-proxy ready: true, restart count 0
Sep 14 12:57:17.415: INFO: local-path-provisioner-78776bfc44-ks8gr from local-path-storage started at 2020-09-13 16:59:56 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.415: INFO: Container local-path-provisioner ready: true, restart count 0
Sep 14 12:57:17.415: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Sep 14 12:57:17.419: INFO: labelsupdatededcfb6e-8ecb-4a0b-bf9d-e9d07c03c758 from downward-api-2537 started at 2020-09-14 12:57:10 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.419: INFO: Container client-container ready: true, restart count 0
Sep 14 12:57:17.419: INFO: kindnet-6mthj from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.419: INFO: Container kindnet-cni ready: true, restart count 0
Sep 14 12:57:17.419: INFO: kube-proxy-thrnr from kube-system started at 2020-09-13 16:59:37 +0000 UTC (1 container statuses recorded)
Sep 14 12:57:17.419: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7ac70233-565e-4fba-8e39-7762109c52bc 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7ac70233-565e-4fba-8e39-7762109c52bc off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7ac70233-565e-4fba-8e39-7762109c52bc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:27.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-397" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:10.304 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":197,"skipped":2971,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:27.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 14 12:57:28.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 14 12:57:30.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685048, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685048, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685048, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685048, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 14 12:57:33.389: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-479" for this suite.
STEP: Destroying namespace "webhook-479-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.276 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":198,"skipped":2978,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:33.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-ddca118f-b7e3-4d34-aae3-ba7b5cdabff7
STEP: Creating a pod to test consume secrets
Sep 14 12:57:34.035: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa" in namespace "projected-6318" to be "Succeeded or Failed"
Sep 14 12:57:34.039: INFO: Pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650768ms
Sep 14 12:57:36.058: INFO: Pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022607187s
Sep 14 12:57:38.063: INFO: Pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027380214s
Sep 14 12:57:40.067: INFO: Pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032049323s
STEP: Saw pod success
Sep 14 12:57:40.067: INFO: Pod "pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa" satisfied condition "Succeeded or Failed"
Sep 14 12:57:40.071: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa container projected-secret-volume-test:
STEP: delete the pod
Sep 14 12:57:40.175: INFO: Waiting for pod pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa to disappear
Sep 14 12:57:40.184: INFO: Pod pod-projected-secrets-02f71b0a-c6db-4371-8830-10509b10a0fa no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:40.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6318" for this suite.
• [SLOW TEST:6.315 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":2988,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:40.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Sep 14 12:57:40.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6478'
Sep 14 12:57:45.044: INFO: stderr: ""
Sep 14 12:57:45.044: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Sep 14 12:57:45.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6478'
Sep 14 12:57:49.185: INFO: stderr: ""
Sep 14 12:57:49.185: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:49.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6478" for this suite.
• [SLOW TEST:8.991 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":200,"skipped":3003,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:49.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-bf428c2e-5cfc-41f1-b90b-af628e0b90c6
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:49.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2872" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":201,"skipped":3044,"failed":0}
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:49.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 12:57:49.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-501" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":202,"skipped":3044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 12:57:49.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1389
STEP: creating service affinity-nodeport-transition in namespace services-1389
STEP: creating replication controller affinity-nodeport-transition in namespace services-1389
I0914 12:57:49.555663 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1389, replica count: 3
I0914 12:57:52.606215 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0914 12:57:55.606521 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 14 12:57:56.241: INFO: Creating new exec pod
Sep 14 12:58:01.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Sep 14 12:58:01.516: INFO: stderr: "I0914 12:58:01.428313 2150 log.go:181] (0xc00036e000) (0xc000e06000) Create stream\nI0914 12:58:01.428380 2150 log.go:181] (0xc00036e000) (0xc000e06000) Stream added, broadcasting: 1\nI0914 12:58:01.430632 2150 log.go:181] (0xc00036e000) Reply frame received for 1\nI0914 12:58:01.430680 2150 log.go:181] (0xc00036e000) (0xc000439e00) Create stream\nI0914 12:58:01.430694 2150 log.go:181] (0xc00036e000) (0xc000439e00) Stream added, broadcasting: 3\nI0914 12:58:01.431608 2150 log.go:181] (0xc00036e000) Reply frame received for 3\nI0914 12:58:01.431655 2150 log.go:181] (0xc00036e000) (0xc000e060a0) Create stream\nI0914 12:58:01.431668 2150 log.go:181] (0xc00036e000) (0xc000e060a0) Stream added, broadcasting: 5\nI0914 12:58:01.432660 2150 log.go:181] (0xc00036e000) Reply frame received for 5\nI0914 12:58:01.508413 2150 log.go:181] (0xc00036e000) Data frame received for 5\nI0914 12:58:01.508433 2150 log.go:181] (0xc000e060a0) (5) Data frame handling\nI0914 12:58:01.508442 2150 log.go:181] (0xc000e060a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0914 12:58:01.508929 2150 log.go:181] (0xc00036e000) Data frame received for 5\nI0914 12:58:01.508941 2150 log.go:181] (0xc000e060a0) (5) Data frame handling\nI0914 12:58:01.508948 2150 log.go:181] (0xc000e060a0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0914 12:58:01.509169 2150 log.go:181] (0xc00036e000) Data frame received for 3\nI0914 12:58:01.509183 2150 log.go:181] (0xc000439e00) (3) Data frame handling\nI0914 12:58:01.509348 2150 log.go:181] (0xc00036e000) Data frame received for 5\nI0914 12:58:01.509376 2150 log.go:181] (0xc000e060a0) (5) Data frame handling\nI0914 12:58:01.511445 2150 log.go:181] (0xc00036e000) Data frame received for 1\nI0914 12:58:01.511465 2150 log.go:181] (0xc000e06000) (1) Data frame handling\nI0914 12:58:01.511475 2150 log.go:181] (0xc000e06000) (1) Data frame sent\nI0914 12:58:01.511484 2150 log.go:181] (0xc00036e000) (0xc000e06000) Stream removed, broadcasting: 1\nI0914 12:58:01.511499 2150 log.go:181] (0xc00036e000) Go away received\nI0914 12:58:01.511993 2150 log.go:181] (0xc00036e000) (0xc000e06000) Stream removed, broadcasting: 1\nI0914 12:58:01.512023 2150 log.go:181] (0xc00036e000) (0xc000439e00) Stream removed, broadcasting: 3\nI0914 12:58:01.512040 2150 log.go:181] (0xc00036e000) (0xc000e060a0) Stream removed, broadcasting: 5\n"
Sep 14 12:58:01.516: INFO: stdout: ""
Sep 14 12:58:01.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c nc -zv -t -w 2 10.111.54.64 80'
Sep 14 12:58:01.722: INFO: stderr: "I0914 12:58:01.656497 2169 log.go:181] (0xc000e23130) (0xc000a1c8c0) Create stream\nI0914 12:58:01.656549 2169 log.go:181] (0xc000e23130) (0xc000a1c8c0) Stream added, broadcasting: 1\nI0914 12:58:01.661671 2169 log.go:181] (0xc000e23130) Reply frame received for 1\nI0914 12:58:01.661697 2169 log.go:181] (0xc000e23130) (0xc000ccc000) Create stream\nI0914 12:58:01.661703 2169 log.go:181] (0xc000e23130) (0xc000ccc000) Stream added, broadcasting: 3\nI0914 12:58:01.662610 2169 log.go:181] (0xc000e23130) Reply frame received for 3\nI0914 12:58:01.662663 2169 log.go:181] (0xc000e23130) (0xc000a1c000) Create stream\nI0914 12:58:01.662693 2169 log.go:181] (0xc000e23130) (0xc000a1c000) Stream added, broadcasting: 5\nI0914 12:58:01.663677 2169 log.go:181] (0xc000e23130) Reply frame received for 5\nI0914 12:58:01.715159 2169 log.go:181] (0xc000e23130) Data frame received for 3\nI0914 12:58:01.715190 2169 log.go:181] (0xc000ccc000) (3) Data frame handling\nI0914 12:58:01.715365 2169 log.go:181] (0xc000e23130) Data frame received for 5\nI0914 12:58:01.715393 2169 log.go:181] (0xc000a1c000) (5) Data frame handling\nI0914 12:58:01.715416 2169 log.go:181] (0xc000a1c000) (5) Data frame sent\nI0914 12:58:01.715428 2169 log.go:181] (0xc000e23130) Data frame received for 5\nI0914 12:58:01.715437 2169 log.go:181] (0xc000a1c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.54.64 80\nConnection to 10.111.54.64 80 port [tcp/http] succeeded!\nI0914 12:58:01.717187 2169 log.go:181] (0xc000e23130) Data frame received for 1\nI0914 12:58:01.717229 2169 log.go:181] (0xc000a1c8c0) (1) Data frame handling\nI0914 12:58:01.717250 2169 log.go:181] (0xc000a1c8c0) (1) Data frame sent\nI0914 12:58:01.717272 2169 log.go:181] (0xc000e23130) (0xc000a1c8c0) Stream removed, broadcasting: 1\nI0914 12:58:01.717290 2169 log.go:181] (0xc000e23130) Go away received\nI0914 12:58:01.717789 2169 log.go:181] (0xc000e23130) (0xc000a1c8c0) Stream removed, broadcasting: 1\nI0914 12:58:01.717811 2169 log.go:181] (0xc000e23130) (0xc000ccc000) Stream removed, broadcasting: 3\nI0914 12:58:01.717822 2169 log.go:181] (0xc000e23130) (0xc000a1c000) Stream removed, broadcasting: 5\n"
Sep 14 12:58:01.722: INFO: stdout: ""
Sep 14 12:58:01.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31763'
Sep 14 12:58:01.945: INFO: stderr: "I0914 12:58:01.861835 2187 log.go:181] (0xc0005d0e70) (0xc0005c8640) Create stream\nI0914 12:58:01.861885 2187 log.go:181] (0xc0005d0e70) (0xc0005c8640) Stream added, broadcasting: 1\nI0914 12:58:01.866194 2187 log.go:181] (0xc0005d0e70) Reply frame received for 1\nI0914 12:58:01.866232 2187 log.go:181] (0xc0005d0e70) (0xc0005c8000) Create stream\nI0914 12:58:01.866243 2187 log.go:181] (0xc0005d0e70) (0xc0005c8000) Stream added, broadcasting: 3\nI0914 12:58:01.867095 2187 log.go:181] (0xc0005d0e70) Reply frame received for 3\nI0914 12:58:01.867121 2187 log.go:181] (0xc0005d0e70) (0xc0005c80a0) Create stream\nI0914 12:58:01.867129 2187 log.go:181] (0xc0005d0e70) (0xc0005c80a0) Stream added, broadcasting: 5\nI0914 12:58:01.867896 2187 log.go:181] (0xc0005d0e70) Reply frame received for 5\nI0914 12:58:01.938293 2187 log.go:181] (0xc0005d0e70) Data frame received for 5\nI0914 12:58:01.938315 2187 log.go:181] (0xc0005c80a0) (5) Data frame handling\nI0914 12:58:01.938330 2187 log.go:181] (0xc0005c80a0) (5) Data frame sent\nI0914 12:58:01.938336 2187 log.go:181] (0xc0005d0e70) Data frame received for 5\nI0914 12:58:01.938340 2187 log.go:181] (0xc0005c80a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31763\nConnection to 172.18.0.15 31763 port [tcp/31763] succeeded!\nI0914 12:58:01.938363 2187 log.go:181] (0xc0005c80a0) (5) Data frame sent\nI0914 12:58:01.938878 2187 log.go:181] (0xc0005d0e70) Data frame received for 5\nI0914 12:58:01.938905 2187 log.go:181] (0xc0005c80a0) (5) Data frame handling\nI0914 12:58:01.938954 2187 log.go:181] (0xc0005d0e70) Data frame received for 3\nI0914 12:58:01.939002 2187 log.go:181] (0xc0005c8000) (3) Data frame handling\nI0914 12:58:01.940806 2187 log.go:181] (0xc0005d0e70) Data frame received for 1\nI0914 12:58:01.940831 2187 log.go:181] (0xc0005c8640) (1) Data frame handling\nI0914 12:58:01.940845 2187 log.go:181] (0xc0005c8640) (1) Data frame sent\nI0914 12:58:01.940868 2187 log.go:181] (0xc0005d0e70) (0xc0005c8640) Stream removed, broadcasting: 1\nI0914 12:58:01.940914 2187 log.go:181] (0xc0005d0e70) Go away received\nI0914 12:58:01.941409 2187 log.go:181] (0xc0005d0e70) (0xc0005c8640) Stream removed, broadcasting: 1\nI0914 12:58:01.941434 2187 log.go:181] (0xc0005d0e70) (0xc0005c8000) Stream removed, broadcasting: 3\nI0914 12:58:01.941446 2187 log.go:181] (0xc0005d0e70) (0xc0005c80a0) Stream removed, broadcasting: 5\n"
Sep 14 12:58:01.945: INFO: stdout: ""
Sep 14 12:58:01.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31763'
Sep 14 12:58:02.143: INFO: stderr: "I0914 12:58:02.077104 2206 log.go:181] (0xc000e28dc0) (0xc0001a30e0) Create stream\nI0914 12:58:02.077188 2206 log.go:181] (0xc000e28dc0) (0xc0001a30e0) Stream added, broadcasting: 1\nI0914 12:58:02.082659 2206 log.go:181] (0xc000e28dc0) Reply frame received for 1\nI0914 12:58:02.082711 2206 log.go:181] (0xc000e28dc0) (0xc0001a3ea0) Create stream\nI0914 12:58:02.082725 2206 log.go:181] (0xc000e28dc0) (0xc0001a3ea0) Stream added, broadcasting: 3\nI0914 12:58:02.083684 2206 log.go:181] (0xc000e28dc0) Reply frame received for 3\nI0914 12:58:02.083746 2206 log.go:181] (0xc000e28dc0) (0xc0005b0000) Create stream\nI0914 12:58:02.083779 2206 log.go:181] (0xc000e28dc0) (0xc0005b0000) Stream added, broadcasting: 5\nI0914 12:58:02.084830 2206 log.go:181] (0xc000e28dc0) Reply frame received for 5\nI0914 12:58:02.137610 2206 log.go:181] (0xc000e28dc0) Data frame received for 5\nI0914 12:58:02.137648 2206 log.go:181] (0xc0005b0000) (5) Data frame handling\nI0914 12:58:02.137666 2206 log.go:181] (0xc0005b0000) (5) Data frame sent\nI0914 12:58:02.137673 2206 log.go:181] (0xc000e28dc0) Data frame received for 5\nI0914 12:58:02.137680 2206 log.go:181] (0xc0005b0000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 31763\nConnection to 172.18.0.16 31763 port [tcp/31763] succeeded!\nI0914 12:58:02.137737 2206 log.go:181] (0xc0005b0000) (5) Data frame sent\nI0914 12:58:02.138039 2206 log.go:181] (0xc000e28dc0) Data frame received for 5\nI0914 12:58:02.138065 2206 log.go:181] (0xc0005b0000) (5) Data frame handling\nI0914 12:58:02.138129 2206 log.go:181] (0xc000e28dc0) Data frame received for 3\nI0914 12:58:02.138158 2206 log.go:181] (0xc0001a3ea0) (3) Data frame handling\nI0914 12:58:02.139642 2206 log.go:181] (0xc000e28dc0) Data frame received for 1\nI0914 12:58:02.139695 2206 log.go:181] (0xc0001a30e0) (1) Data frame handling\nI0914 12:58:02.139717 2206 log.go:181] (0xc0001a30e0) (1) Data frame sent\nI0914 12:58:02.139728 2206 log.go:181] (0xc000e28dc0) (0xc0001a30e0) Stream removed, broadcasting: 1\nI0914 12:58:02.139739 2206 log.go:181] (0xc000e28dc0) Go away received\nI0914 12:58:02.140257 2206 log.go:181] (0xc000e28dc0) (0xc0001a30e0) Stream removed, broadcasting: 1\nI0914 12:58:02.140287 2206 log.go:181] (0xc000e28dc0) (0xc0001a3ea0) Stream removed, broadcasting: 3\nI0914 12:58:02.140303 2206 log.go:181] (0xc000e28dc0) (0xc0005b0000) Stream removed, broadcasting: 5\n"
Sep 14 12:58:02.143: INFO: stdout: ""
Sep 14 12:58:02.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31763/ ; done'
Sep 14 12:58:02.460: INFO: stderr: "I0914 12:58:02.287644 2224 log.go:181] (0xc000e0d550) (0xc000e04960) Create stream\nI0914 12:58:02.287704 2224 log.go:181] (0xc000e0d550) (0xc000e04960) Stream added, broadcasting: 1\nI0914 12:58:02.293897 2224 log.go:181] (0xc000e0d550) Reply frame received for 1\nI0914 12:58:02.293945 2224 log.go:181] (0xc000e0d550) (0xc000e04000) Create stream\nI0914 12:58:02.293961 2224 log.go:181] (0xc000e0d550) (0xc000e04000) Stream added, broadcasting: 3\nI0914 12:58:02.294753 2224 log.go:181] (0xc000e0d550) Reply frame received for 3\nI0914 12:58:02.294786 2224 log.go:181] (0xc000e0d550) (0xc00090c320) Create stream\nI0914 12:58:02.294793 2224 log.go:181] (0xc000e0d550) (0xc00090c320) Stream added, broadcasting: 5\nI0914 12:58:02.295374 2224 log.go:181] (0xc000e0d550) Reply frame received for 5\nI0914 12:58:02.350903 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.350926 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.350937 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.350964 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.350991 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.351015 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.354646 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.354663 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.354672 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.355485 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.355513 2224 log.go:181] (0xc00090c320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.355552 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.355593 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.355623 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.355664 2224 log.go:181] (0xc00090c320) (5) Data frame sent\nI0914 12:58:02.363222 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.363251 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.363270 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.363983 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.364028 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.364048 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.364072 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.364082 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.364101 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.371459 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.371479 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.371490 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.372575 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.372596 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.372619 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.372639 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.372662 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.372679 2224 log.go:181] (0xc00090c320) (5) Data frame sent\nI0914 12:58:02.372693 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.372705 2224 log.go:181] (0xc00090c320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.372740 2224 log.go:181] (0xc00090c320) (5) Data frame sent\nI0914 12:58:02.378736 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.378768 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.378791 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.379597 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.379614 2224 log.go:181] 
(0xc00090c320) (5) Data frame handling\nI0914 12:58:02.379626 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.379668 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.379690 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.379701 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.385288 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.385311 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.385329 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.385993 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.386101 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.386139 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.386161 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.386176 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.386195 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.389538 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.389557 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.389567 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.390232 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.390261 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.390286 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.390306 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.390318 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.390335 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.394046 2224 
log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.394061 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.394071 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.394754 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.394775 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.394790 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.394891 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.394908 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.394922 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.400565 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.400580 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.400592 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.401146 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.401173 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.401184 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.401199 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.401208 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.401216 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.409299 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.409322 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.409338 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.409378 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.409394 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.409401 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 
12:58:02.409408 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.409414 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.409420 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.415107 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.415132 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.415160 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.415792 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.415833 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.415846 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.415861 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.415875 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.415886 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.421806 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.421837 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.421883 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.422296 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.422328 2224 log.go:181] (0xc00090c320) (5) Data frame handling\n+ echo\n+ curl -qI0914 12:58:02.422354 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.422380 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.422393 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.422422 2224 log.go:181] (0xc00090c320) (5) Data frame sent\nI0914 12:58:02.422437 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.422449 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.422465 2224 log.go:181] 
(0xc00090c320) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.427264 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.427280 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.427289 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.427773 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.427787 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.427797 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.427815 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.427831 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.427841 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.433725 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.433751 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.433775 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.434221 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.434243 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.434254 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.434267 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.434273 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.434279 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.439907 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.439927 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.439941 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.440709 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.440723 2224 log.go:181] 
(0xc00090c320) (5) Data frame handling\nI0914 12:58:02.440741 2224 log.go:181] (0xc000e0d550) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.440777 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.440801 2224 log.go:181] (0xc00090c320) (5) Data frame sent\nI0914 12:58:02.440820 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.446507 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.446534 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.446560 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.446917 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.446932 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.446942 2224 log.go:181] (0xc00090c320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.446958 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.446981 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.446996 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.453470 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.453495 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.453523 2224 log.go:181] (0xc000e04000) (3) Data frame sent\nI0914 12:58:02.454193 2224 log.go:181] (0xc000e0d550) Data frame received for 3\nI0914 12:58:02.454218 2224 log.go:181] (0xc000e04000) (3) Data frame handling\nI0914 12:58:02.454652 2224 log.go:181] (0xc000e0d550) Data frame received for 5\nI0914 12:58:02.454664 2224 log.go:181] (0xc00090c320) (5) Data frame handling\nI0914 12:58:02.456910 2224 log.go:181] (0xc000e0d550) Data frame received for 1\nI0914 12:58:02.456938 2224 log.go:181] (0xc000e04960) (1) Data frame handling\nI0914 12:58:02.456969 2224 log.go:181] (0xc000e04960) (1) Data frame sent\nI0914 
12:58:02.456993 2224 log.go:181] (0xc000e0d550) (0xc000e04960) Stream removed, broadcasting: 1\nI0914 12:58:02.457025 2224 log.go:181] (0xc000e0d550) Go away received\nI0914 12:58:02.457399 2224 log.go:181] (0xc000e0d550) (0xc000e04960) Stream removed, broadcasting: 1\nI0914 12:58:02.457417 2224 log.go:181] (0xc000e0d550) (0xc000e04000) Stream removed, broadcasting: 3\nI0914 12:58:02.457425 2224 log.go:181] (0xc000e0d550) (0xc00090c320) Stream removed, broadcasting: 5\n" Sep 14 12:58:02.461: INFO: stdout: "\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-9jtn8\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-9jtn8\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-tlkb6\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-9jtn8\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-9jtn8\naffinity-nodeport-transition-tlkb6" Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-9jtn8 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-9jtn8 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 
Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-9jtn8 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-9jtn8 Sep 14 12:58:02.461: INFO: Received response from host: affinity-nodeport-transition-tlkb6 Sep 14 12:58:02.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1389 execpod-affinitywf82k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31763/ ; done' Sep 14 12:58:02.783: INFO: stderr: "I0914 12:58:02.614407 2242 log.go:181] (0xc000c3c000) (0xc000c5a280) Create stream\nI0914 12:58:02.614448 2242 log.go:181] (0xc000c3c000) (0xc000c5a280) Stream added, broadcasting: 1\nI0914 12:58:02.616121 2242 log.go:181] (0xc000c3c000) Reply frame received for 1\nI0914 12:58:02.616252 2242 log.go:181] (0xc000c3c000) (0xc0008a2000) Create stream\nI0914 12:58:02.616270 2242 log.go:181] (0xc000c3c000) (0xc0008a2000) Stream added, broadcasting: 3\nI0914 12:58:02.617122 2242 log.go:181] (0xc000c3c000) Reply frame received for 3\nI0914 12:58:02.617160 2242 log.go:181] (0xc000c3c000) (0xc000a377c0) Create stream\nI0914 12:58:02.617170 2242 log.go:181] (0xc000c3c000) (0xc000a377c0) Stream added, broadcasting: 5\nI0914 12:58:02.617998 2242 log.go:181] (0xc000c3c000) Reply frame received for 5\nI0914 12:58:02.667425 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.667468 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.667491 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.667518 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 
12:58:02.667541 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.667571 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.674357 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.674373 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.674384 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.675187 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.675206 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.675218 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.675226 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.675235 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.675241 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.681934 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.681947 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.681953 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.682564 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.682582 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.682599 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.682619 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.682626 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.682637 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.687497 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.687516 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.687532 2242 log.go:181] (0xc0008a2000) (3) Data 
frame sent\nI0914 12:58:02.688589 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.688618 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.688631 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.688645 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.688652 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.688660 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.694864 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.694895 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.694946 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.695697 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.695720 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.695731 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.695755 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.695779 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.695802 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.699863 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.699901 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.699935 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.700544 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.700570 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.700588 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.700611 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.700621 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.700634 2242 log.go:181] 
(0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.705360 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.705401 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.705430 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.705719 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.705739 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.705758 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.705915 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.705937 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.705954 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.711213 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.711244 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.711272 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.712053 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.712069 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.712079 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.712105 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.712113 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.712123 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.719029 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.719042 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.719049 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.719711 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.719722 2242 
log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.719728 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.719744 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.719750 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.719760 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.727265 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.727292 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.727310 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.728188 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.728215 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.728225 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.728255 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.728283 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.728315 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\nI0914 12:58:02.728339 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.728357 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.728396 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\nI0914 12:58:02.734498 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.734511 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.734517 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.735204 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.735229 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.735237 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.735272 
2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.735322 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.735359 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.742142 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.742155 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.742161 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.743013 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.743033 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.743042 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.743090 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.743112 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.743126 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.749681 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.749694 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.749701 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.750607 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.750619 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.750640 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.750672 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.750684 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.750699 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.754643 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.754661 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.754669 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 
12:58:02.755511 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.755523 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.755547 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.755579 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.755600 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.755618 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.763058 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.763078 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.763090 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.763959 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.764059 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.764094 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.764115 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.764225 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.764330 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.769671 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.769700 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.769729 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.770162 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.770180 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.770203 2242 log.go:181] (0xc000a377c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31763/\nI0914 12:58:02.770233 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.770261 2242 log.go:181] (0xc0008a2000) (3) Data frame 
handling\nI0914 12:58:02.770276 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.776655 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.776684 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.776704 2242 log.go:181] (0xc0008a2000) (3) Data frame sent\nI0914 12:58:02.777743 2242 log.go:181] (0xc000c3c000) Data frame received for 5\nI0914 12:58:02.777768 2242 log.go:181] (0xc000a377c0) (5) Data frame handling\nI0914 12:58:02.777835 2242 log.go:181] (0xc000c3c000) Data frame received for 3\nI0914 12:58:02.777852 2242 log.go:181] (0xc0008a2000) (3) Data frame handling\nI0914 12:58:02.779808 2242 log.go:181] (0xc000c3c000) Data frame received for 1\nI0914 12:58:02.779844 2242 log.go:181] (0xc000c5a280) (1) Data frame handling\nI0914 12:58:02.779863 2242 log.go:181] (0xc000c5a280) (1) Data frame sent\nI0914 12:58:02.779877 2242 log.go:181] (0xc000c3c000) (0xc000c5a280) Stream removed, broadcasting: 1\nI0914 12:58:02.779890 2242 log.go:181] (0xc000c3c000) Go away received\nI0914 12:58:02.780321 2242 log.go:181] (0xc000c3c000) (0xc000c5a280) Stream removed, broadcasting: 1\nI0914 12:58:02.780340 2242 log.go:181] (0xc000c3c000) (0xc0008a2000) Stream removed, broadcasting: 3\nI0914 12:58:02.780348 2242 log.go:181] (0xc000c3c000) (0xc000a377c0) Stream removed, broadcasting: 5\n" Sep 14 12:58:02.783: INFO: stdout: "\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm\naffinity-nodeport-transition-tk5qm" Sep 14 12:58:02.783: INFO: 
Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Received response from host: affinity-nodeport-transition-tk5qm Sep 14 12:58:02.783: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1389, will wait for the garbage collector to delete the pods Sep 14 12:58:02.907: INFO: Deleting ReplicationController affinity-nodeport-transition took: 18.198897ms Sep 14 12:58:03.308: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.233738ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:15.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1389" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.154 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":203,"skipped":3067,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:15.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:20.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2457" for this suite. • [SLOW TEST:5.213 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":204,"skipped":3080,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:20.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be 
provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:58:21.072: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"81ec9235-41be-47c1-9215-52004f76656a", Controller:(*bool)(0xc002b4dfa2), BlockOwnerDeletion:(*bool)(0xc002b4dfa3)}} Sep 14 12:58:21.131: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1017af01-617d-4768-b383-4c93e2ad31f9", Controller:(*bool)(0xc003ad81da), BlockOwnerDeletion:(*bool)(0xc003ad81db)}} Sep 14 12:58:21.187: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ed3cc416-6cec-49d2-8a25-09d58932d931", Controller:(*bool)(0xc003ad83da), BlockOwnerDeletion:(*bool)(0xc003ad83db)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:26.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1875" for this suite. 
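The garbage-collector test above wires three pods into a dependency circle via ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, as the logged OwnerReference values show). A minimal sketch of that structure, with hypothetical UIDs and a toy cycle check standing in for what the GC must tolerate:

```python
# Sketch of the circular ownerReferences the GC test builds (UIDs hypothetical).
# pod1 is owned by pod3, pod2 by pod1, pod3 by pod2 -- matching the log above.
def owner_ref(name, uid):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": name,
        "uid": uid,
        "controller": True,
        "blockOwnerDeletion": True,
    }

pods = {
    "pod1": {"metadata": {"name": "pod1", "ownerReferences": [owner_ref("pod3", "uid-3")]}},
    "pod2": {"metadata": {"name": "pod2", "ownerReferences": [owner_ref("pod1", "uid-1")]}},
    "pod3": {"metadata": {"name": "pod3", "ownerReferences": [owner_ref("pod2", "uid-2")]}},
}

def has_owner_cycle(pods, start):
    """Follow each pod's first ownerReference until we revisit a pod or run out."""
    seen = set()
    cur = start
    while cur in pods:
        if cur in seen:
            return True
        seen.add(cur)
        refs = pods[cur]["metadata"].get("ownerReferences", [])
        if not refs:
            return False
        cur = refs[0]["name"]
    return False

print(has_owner_cycle(pods, "pod1"))  # True: the dependency circle the GC must not block on
```

The test passes because the garbage collector does not deadlock on such a circle; it still deletes the objects when the namespace is torn down.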
• [SLOW TEST:5.471 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":205,"skipped":3082,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:26.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 14 12:58:34.434: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:34.452: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:36.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:36.457: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:38.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:38.462: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:40.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:40.457: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:42.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:42.458: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:44.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:44.457: INFO: Pod pod-with-poststart-http-hook still exists Sep 14 12:58:46.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 14 12:58:46.456: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:46.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5294" for this suite. 
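The lifecycle-hook test above creates a pod whose container declares a postStart httpGet hook pointing at the handler pod created in BeforeEach. A hedged sketch of such a pod manifest; the image, handler path, and port are assumptions for illustration, not values taken from this log:

```python
# Illustrative pod with a postStart httpGet lifecycle hook, roughly the shape
# the test exercises. Image, path, and port are assumed, not from the log.
pod_with_poststart_http_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-http-hook",
            "image": "k8s.gcr.io/pause:3.2",  # assumed image
            "lifecycle": {
                "postStart": {
                    "httpGet": {
                        "path": "/echo?msg=poststart",  # assumed handler path
                        "port": 8080,                   # assumed handler port
                    }
                }
            },
        }]
    },
}
print(pod_with_poststart_http_hook["spec"]["containers"][0]["lifecycle"]["postStart"]["httpGet"]["port"])
```

The "check poststart hook" step then verifies the handler pod actually received the GET before the test deletes the hooked pod and polls for it to disappear.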
• [SLOW TEST:20.208 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3088,"failed":0} [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:46.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all 
namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:46.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-668" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":207,"skipped":3088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:46.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 12:58:47.276: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 12:58:49.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685127, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685127, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685127, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685127, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 12:58:52.325: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:53.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6089" for this suite. STEP: Destroying namespace "webhook-6089-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.678 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":208,"skipped":3149,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:53.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with 
secret that has name projected-secret-test-21a63561-6ce7-445c-8a61-4c7959dbd022 STEP: Creating a pod to test consume secrets Sep 14 12:58:53.391: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28" in namespace "projected-7674" to be "Succeeded or Failed" Sep 14 12:58:53.411: INFO: Pod "pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28": Phase="Pending", Reason="", readiness=false. Elapsed: 19.996109ms Sep 14 12:58:55.415: INFO: Pod "pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02403053s Sep 14 12:58:57.420: INFO: Pod "pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028955703s STEP: Saw pod success Sep 14 12:58:57.420: INFO: Pod "pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28" satisfied condition "Succeeded or Failed" Sep 14 12:58:57.424: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28 container projected-secret-volume-test: STEP: delete the pod Sep 14 12:58:57.461: INFO: Waiting for pod pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28 to disappear Sep 14 12:58:57.473: INFO: Pod pod-projected-secrets-c209a92d-bcfc-4be5-ade2-de6ed7aa4e28 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:58:57.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7674" for this suite. 
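The projected-secret test above mounts a secret through a `projected` volume with a restrictive defaultMode while the pod runs as a non-root user with an fsGroup. A minimal sketch of such a pod spec; the UIDs, image, and paths are assumptions, only the overall shape follows the test:

```python
# Sketch of a pod consuming a secret via a projected volume with defaultMode
# and fsGroup set. UIDs, image, and mount path are assumed for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-projected-secrets-example"},
    "spec": {
        "securityContext": {"runAsUser": 1000, "fsGroup": 1001},  # assumed IDs
        "containers": [{
            "name": "projected-secret-volume-test",
            "image": "busybox",  # assumed image
            "command": ["cat", "/etc/projected-secret-volume/data-1"],
            "volumeMounts": [{"name": "projected-secret-volume",
                              "mountPath": "/etc/projected-secret-volume"}],
        }],
        "volumes": [{
            "name": "projected-secret-volume",
            "projected": {
                "defaultMode": 0o400,  # owner read-only; fsGroup grants group access
                "sources": [{"secret": {"name": "projected-secret-test"}}],
            },
        }],
    },
}
```

The test's "Succeeded or Failed" wait then confirms the container could read the file with those permissions before the pod is deleted.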
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3169,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:58:57.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-7f542d9e-787d-43d8-ac45-e2d3892b2cf0 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7f542d9e-787d-43d8-ac45-e2d3892b2cf0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:59:03.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6898" for this suite. 
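The ConfigMap volume test above updates a mounted ConfigMap and then waits to observe the change in the volume. A toy model of that behavior, with illustrative keys and values (the real mechanism is the kubelet periodically re-projecting the ConfigMap's data onto disk):

```python
# Minimal model of the behavior verified above: a ConfigMap mounted as a
# volume is re-projected on the next kubelet sync after its data changes,
# so the files the pod sees eventually reflect the update. Keys are illustrative.
configmap = {"data": {"data-1": "value-1"}}

def project_volume(cm):
    """What the kubelet materializes on disk for a configMap volume."""
    return dict(cm["data"])

files = project_volume(configmap)          # initial mount
configmap["data"]["data-1"] = "value-2"    # the test's update step
files = project_volume(configmap)          # next sync re-projects the data
print(files["data-1"])  # value-2
```

This propagation is eventually consistent, which is why the test logs a "waiting to observe update in volume" step rather than asserting immediately.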
• [SLOW TEST:6.302 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3183,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:59:03.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] 
[k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:59:07.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7807" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3194,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:59:07.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 12:59:08.055: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Sep 14 12:59:11.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 create -f -' Sep 14 12:59:14.621: INFO: stderr: "" Sep 14 12:59:14.621: INFO: stdout: 
"e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 14 12:59:14.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 delete e2e-test-crd-publish-openapi-2388-crds test-foo' Sep 14 12:59:14.740: INFO: stderr: "" Sep 14 12:59:14.740: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Sep 14 12:59:14.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 apply -f -' Sep 14 12:59:15.032: INFO: stderr: "" Sep 14 12:59:15.032: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 14 12:59:15.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 delete e2e-test-crd-publish-openapi-2388-crds test-foo' Sep 14 12:59:15.155: INFO: stderr: "" Sep 14 12:59:15.155: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Sep 14 12:59:15.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 create -f -' Sep 14 12:59:15.457: INFO: rc: 1 Sep 14 12:59:15.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 apply -f -' Sep 14 12:59:15.730: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Sep 14 12:59:15.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 create -f -' Sep 14 12:59:15.987: INFO: rc: 1 Sep 14 12:59:15.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5545 apply -f -' Sep 14 12:59:16.265: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Sep 14 12:59:16.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds' Sep 14 12:59:16.548: INFO: stderr: "" Sep 14 12:59:16.548: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2388-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Sep 14 12:59:16.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds.metadata' Sep 14 12:59:16.828: INFO: stderr: "" Sep 14 12:59:16.828: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2388-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 14 12:59:16.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds.spec' Sep 14 12:59:17.127: INFO: stderr: "" Sep 14 12:59:17.127: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2388-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 14 12:59:17.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds.spec.bars' Sep 14 12:59:17.416: INFO: stderr: "" Sep 14 12:59:17.416: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2388-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 14 12:59:17.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds.spec.bars2' Sep 14 12:59:17.673: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 12:59:20.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5545" for this suite. • [SLOW TEST:12.640 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":212,"skipped":3209,"failed":0} [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 12:59:20.622: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Sep 14 12:59:21.407: INFO: Pod name wrapped-volume-race-b2a5e5b3-b525-4716-abb5-29ff71b3758b: Found 0 pods out of 5 Sep 14 12:59:28.814: INFO: Pod name wrapped-volume-race-b2a5e5b3-b525-4716-abb5-29ff71b3758b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b2a5e5b3-b525-4716-abb5-29ff71b3758b in namespace emptydir-wrapper-7474, will wait for the garbage collector to delete the pods Sep 14 12:59:40.956: INFO: Deleting ReplicationController wrapped-volume-race-b2a5e5b3-b525-4716-abb5-29ff71b3758b took: 8.069254ms Sep 14 12:59:41.457: INFO: Terminating ReplicationController wrapped-volume-race-b2a5e5b3-b525-4716-abb5-29ff71b3758b pods took: 500.189974ms STEP: Creating RC which spawns configmap-volume pods Sep 14 12:59:55.981: INFO: Pod name wrapped-volume-race-62757e51-46f2-4044-b2d5-e4a00e2b1ec1: Found 0 pods out of 5 Sep 14 13:00:00.990: INFO: Pod name wrapped-volume-race-62757e51-46f2-4044-b2d5-e4a00e2b1ec1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-62757e51-46f2-4044-b2d5-e4a00e2b1ec1 in namespace emptydir-wrapper-7474, will wait for the garbage collector to delete the pods Sep 14 13:00:15.074: INFO: Deleting ReplicationController wrapped-volume-race-62757e51-46f2-4044-b2d5-e4a00e2b1ec1 took: 7.656874ms Sep 14 13:00:15.474: INFO: Terminating ReplicationController wrapped-volume-race-62757e51-46f2-4044-b2d5-e4a00e2b1ec1 pods took: 400.279674ms 
STEP: Creating RC which spawns configmap-volume pods Sep 14 13:00:26.387: INFO: Pod name wrapped-volume-race-4b6a074c-ea74-42cd-9da1-f56a885327f0: Found 0 pods out of 5 Sep 14 13:00:31.395: INFO: Pod name wrapped-volume-race-4b6a074c-ea74-42cd-9da1-f56a885327f0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4b6a074c-ea74-42cd-9da1-f56a885327f0 in namespace emptydir-wrapper-7474, will wait for the garbage collector to delete the pods Sep 14 13:00:45.515: INFO: Deleting ReplicationController wrapped-volume-race-4b6a074c-ea74-42cd-9da1-f56a885327f0 took: 41.08956ms Sep 14 13:00:45.915: INFO: Terminating ReplicationController wrapped-volume-race-4b6a074c-ea74-42cd-9da1-f56a885327f0 pods took: 400.225796ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:00:56.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7474" for this suite. 
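Each round above creates a ReplicationController whose pods mount many ConfigMap volumes at once, the pattern that historically exposed races in the emptyDir wrapper. A minimal sketch of building such a pod template (the helper and resource names here are illustrative, not taken from the e2e framework):

```python
# Build a pod spec dict that mounts one volume per ConfigMap, the shape the
# wrapped-volume-race pods use to exercise concurrent volume setup.
# All names below are made up for illustration.
def configmap_volume_pod(pod_name, configmap_names, image="k8s.gcr.io/pause:3.2"):
    volumes = [
        {"name": "racey-volume-%d" % i, "configMap": {"name": cm}}
        for i, cm in enumerate(configmap_names)
    ]
    mounts = [
        {"name": v["name"], "mountPath": "/etc/config-%d" % i}
        for i, v in enumerate(volumes)
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "containers": [{
                "name": "test-container",
                "image": image,
                "volumeMounts": mounts,
            }],
            "volumes": volumes,
        },
    }

pod = configmap_volume_pod(
    "wrapped-volume-race",
    ["racey-configmap-%d" % i for i in range(50)])
print(len(pod["spec"]["volumes"]))  # 50, matching the "Creating 50 configmaps" step
```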
• [SLOW TEST:96.130 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":213,"skipped":3209,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:00:56.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 14 13:00:56.861: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276533 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:00:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 13:00:56.861: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276534 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:00:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 13:00:56.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276535 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:00:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 14 13:01:06.911: 
INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276751 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:01:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 13:01:06.911: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276752 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:01:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 14 13:01:06.912: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2958 /api/v1/namespaces/watch-2958/configmaps/e2e-watch-test-label-changed 94b23279-0ee7-4be5-859e-0d81e501fa37 276753 0 2020-09-14 13:00:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-14 13:01:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:01:06.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2958" for this suite. 
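The event sequence above (ADDED, MODIFIED, DELETED when the label changes, then ADDED again once it is restored) is how a label-selector watch translates updates: an object that stops matching the selector is reported to the watcher as DELETED, and one that starts matching again as ADDED. A local sketch of that translation, with illustrative names and no cluster involved:

```python
# Decide which watch event a label-selector watcher sees for an update,
# given the object's labels before and after the change.
def watch_event(selector, old_labels, new_labels):
    def matches(labels):
        return labels is not None and all(
            labels.get(k) == v for k, v in selector.items())

    was, now = matches(old_labels), matches(new_labels)
    if not was and now:
        return "ADDED"       # started matching the selector
    if was and not now:
        return "DELETED"     # stopped matching: surfaced as a deletion
    if was and now:
        return "MODIFIED"
    return None              # never matched: not delivered to this watcher

sel = {"watch-this-configmap": "label-changed-and-restored"}
# changing the label value away from the selector -> DELETED
print(watch_event(sel,
                  {"watch-this-configmap": "label-changed-and-restored"},
                  {"watch-this-configmap": "other-value"}))  # DELETED
# restoring the label value -> ADDED
print(watch_event(sel,
                  {"watch-this-configmap": "other-value"},
                  {"watch-this-configmap": "label-changed-and-restored"}))  # ADDED
```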
• [SLOW TEST:10.170 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":214,"skipped":3217,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:01:06.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-ef140545-1b56-4893-aa9e-8ead1a8dabcf STEP: Creating a pod to test consume secrets Sep 14 13:01:06.986: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e" in namespace "projected-3885" to be "Succeeded or Failed" 
Sep 14 13:01:07.001: INFO: Pod "pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.442303ms Sep 14 13:01:09.004: INFO: Pod "pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018110165s Sep 14 13:01:11.031: INFO: Pod "pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045358449s STEP: Saw pod success Sep 14 13:01:11.032: INFO: Pod "pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e" satisfied condition "Succeeded or Failed" Sep 14 13:01:11.035: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e container projected-secret-volume-test: STEP: delete the pod Sep 14 13:01:11.153: INFO: Waiting for pod pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e to disappear Sep 14 13:01:11.161: INFO: Pod pod-projected-secrets-263aba29-676d-4ac7-9cbf-5a06f17f299e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:01:11.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3885" for this suite. 
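The pod above consumes a projected secret volume "with mappings": each item maps a secret key to a chosen relative path inside the volume, instead of the default path equal to the key. A minimal local sketch of that key-to-path projection (the function name and sample data are illustrative):

```python
import base64

# Compute the {relative_path: decoded_bytes} files a projected secret volume
# with item mappings would materialize inside the container.
def project_secret(secret_data, items):
    files = {}
    for item in items:
        # each item maps a secret key to a relative path in the volume
        files[item["path"]] = base64.b64decode(secret_data[item["key"]])
    return files

# Secret .data values are base64-encoded in the API object.
data = {"data-1": base64.b64encode(b"value-1").decode()}
files = project_secret(data, [{"key": "data-1", "path": "new-path-data-1"}])
print(files)  # {'new-path-data-1': b'value-1'}
```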
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":215,"skipped":3222,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:01:11.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2525 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2525;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2525 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2525;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2525.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2525.svc;check="$$(dig +tcp 
+noall +answer +search dns-test-service.dns-2525.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2525.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.70.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.70.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.70.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.70.203_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2525 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2525;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2525 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2525;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2525.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2525.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2525.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2525.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.70.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.70.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.70.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.70.203_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:01:17.388: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.391: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.394: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.397: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.400: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods 
dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.403: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.409: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.429: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.432: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.435: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.440: INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the 
requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.447: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.450: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:17.467: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:22.472: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.476: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not 
find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.480: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.483: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.489: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.492: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.494: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.516: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.519: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: 
the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.522: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.526: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.529: INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.532: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.535: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.539: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:22.557: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:27.473: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.477: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.481: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.484: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.491: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.493: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.496: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.515: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.517: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.520: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.523: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.526: INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.529: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.532: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.535: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:27.553: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:32.595: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.598: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.651: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 
13:01:32.674: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.751: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.755: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.773: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.775: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.780: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods 
dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.785: INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.788: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.790: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.793: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:32.809: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc 
jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:37.481: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.483: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.485: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.487: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.489: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.491: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.494: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.496: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod 
dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.514: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.517: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.519: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.525: INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.527: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.530: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.532: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:37.551: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:42.472: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.476: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.481: INFO: Unable to read wheezy_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.485: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.487: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.492: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.495: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.517: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.520: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.522: INFO: Unable to read jessie_udp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525 from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.528: 
INFO: Unable to read jessie_udp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.531: INFO: Unable to read jessie_tcp@dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.534: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.537: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc from pod dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30: the server could not find the requested resource (get pods dns-test-ec54227a-894f-4d29-96ac-44f53de16d30) Sep 14 13:01:42.557: INFO: Lookups using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2525 wheezy_tcp@dns-test-service.dns-2525 wheezy_udp@dns-test-service.dns-2525.svc wheezy_tcp@dns-test-service.dns-2525.svc wheezy_udp@_http._tcp.dns-test-service.dns-2525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2525 jessie_tcp@dns-test-service.dns-2525 jessie_udp@dns-test-service.dns-2525.svc jessie_tcp@dns-test-service.dns-2525.svc jessie_udp@_http._tcp.dns-test-service.dns-2525.svc jessie_tcp@_http._tcp.dns-test-service.dns-2525.svc] Sep 14 13:01:47.557: INFO: DNS probes using dns-2525/dns-test-ec54227a-894f-4d29-96ac-44f53de16d30 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:01:48.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2525" for this suite.

• [SLOW TEST:37.032 seconds]
[sig-network] DNS
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":216,"skipped":3235,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:01:48.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-1276
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1276
STEP: Deleting pre-stop pod
Sep 14 13:02:03.387: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:02:03.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1276" for this suite.
• [SLOW TEST:15.204 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":217,"skipped":3242,"failed":0}
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:02:03.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Sep 14 13:02:03.519: INFO: Created pod &Pod{ObjectMeta:{dns-8164 dns-8164 /api/v1/namespaces/dns-8164/pods/dns-8164 8185cffe-1ebd-461e-9261-7f1c7f0eb63c 277050 0 2020-09-14 13:02:03 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-14 13:02:03 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wmzwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wmzwj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wmzwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,}
,},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 14 13:02:03.651: INFO: The status of Pod dns-8164 is Pending, waiting for it to be Running (with Ready = true) Sep 14 13:02:05.656: INFO: The status of Pod dns-8164 is Pending, waiting for it to be Running (with Ready = true) Sep 14 13:02:07.655: INFO: The status of Pod dns-8164 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Sep 14 13:02:07.655: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8164 PodName:dns-8164 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:02:07.655: INFO: >>> kubeConfig: /root/.kube/config I0914 13:02:07.689119 7 log.go:181] (0xc0009548f0) (0xc001b8c0a0) Create stream I0914 13:02:07.689147 7 log.go:181] (0xc0009548f0) (0xc001b8c0a0) Stream added, broadcasting: 1 I0914 13:02:07.690712 7 log.go:181] (0xc0009548f0) Reply frame received for 1 I0914 13:02:07.690746 7 log.go:181] (0xc0009548f0) (0xc003ddcaa0) Create stream I0914 13:02:07.690753 7 log.go:181] (0xc0009548f0) (0xc003ddcaa0) Stream added, broadcasting: 3 I0914 13:02:07.691592 7 log.go:181] (0xc0009548f0) Reply frame received for 3 I0914 13:02:07.691624 7 log.go:181] (0xc0009548f0) (0xc001b8c140) Create stream I0914 13:02:07.691637 7 log.go:181] (0xc0009548f0) (0xc001b8c140) Stream added, broadcasting: 5 I0914 13:02:07.692504 7 log.go:181] (0xc0009548f0) Reply frame received for 5 I0914 13:02:07.764961 7 log.go:181] (0xc0009548f0) Data frame received for 3 I0914 13:02:07.764985 7 log.go:181] (0xc003ddcaa0) (3) Data frame handling I0914 13:02:07.765009 7 log.go:181] (0xc003ddcaa0) (3) Data frame sent I0914 13:02:07.766805 7 log.go:181] (0xc0009548f0) Data frame received for 5 I0914 13:02:07.766848 7 log.go:181] (0xc001b8c140) (5) Data frame handling I0914 13:02:07.766878 7 log.go:181] (0xc0009548f0) Data frame received for 3 I0914 13:02:07.766892 7 log.go:181] (0xc003ddcaa0) (3) Data frame handling I0914 13:02:07.768770 7 log.go:181] (0xc0009548f0) Data frame received for 1 I0914 13:02:07.768812 7 log.go:181] (0xc001b8c0a0) (1) Data frame handling I0914 13:02:07.768834 7 log.go:181] (0xc001b8c0a0) (1) Data frame sent I0914 13:02:07.768853 7 log.go:181] (0xc0009548f0) (0xc001b8c0a0) Stream removed, broadcasting: 1 I0914 13:02:07.768876 7 log.go:181] (0xc0009548f0) Go away received I0914 13:02:07.768982 7 log.go:181] 
(0xc0009548f0) (0xc001b8c0a0) Stream removed, broadcasting: 1 I0914 13:02:07.768999 7 log.go:181] (0xc0009548f0) (0xc003ddcaa0) Stream removed, broadcasting: 3 I0914 13:02:07.769006 7 log.go:181] (0xc0009548f0) (0xc001b8c140) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Sep 14 13:02:07.769: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8164 PodName:dns-8164 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:02:07.769: INFO: >>> kubeConfig: /root/.kube/config I0914 13:02:07.803273 7 log.go:181] (0xc00346e580) (0xc003e61e00) Create stream I0914 13:02:07.803301 7 log.go:181] (0xc00346e580) (0xc003e61e00) Stream added, broadcasting: 1 I0914 13:02:07.805591 7 log.go:181] (0xc00346e580) Reply frame received for 1 I0914 13:02:07.805640 7 log.go:181] (0xc00346e580) (0xc003e61ea0) Create stream I0914 13:02:07.805656 7 log.go:181] (0xc00346e580) (0xc003e61ea0) Stream added, broadcasting: 3 I0914 13:02:07.806689 7 log.go:181] (0xc00346e580) Reply frame received for 3 I0914 13:02:07.806732 7 log.go:181] (0xc00346e580) (0xc003ee8c80) Create stream I0914 13:02:07.806747 7 log.go:181] (0xc00346e580) (0xc003ee8c80) Stream added, broadcasting: 5 I0914 13:02:07.807913 7 log.go:181] (0xc00346e580) Reply frame received for 5 I0914 13:02:07.878468 7 log.go:181] (0xc00346e580) Data frame received for 3 I0914 13:02:07.878513 7 log.go:181] (0xc003e61ea0) (3) Data frame handling I0914 13:02:07.878562 7 log.go:181] (0xc003e61ea0) (3) Data frame sent I0914 13:02:07.879302 7 log.go:181] (0xc00346e580) Data frame received for 3 I0914 13:02:07.879336 7 log.go:181] (0xc003e61ea0) (3) Data frame handling I0914 13:02:07.879556 7 log.go:181] (0xc00346e580) Data frame received for 5 I0914 13:02:07.879582 7 log.go:181] (0xc003ee8c80) (5) Data frame handling I0914 13:02:07.881065 7 log.go:181] (0xc00346e580) Data frame received for 1 I0914 13:02:07.881090 7 log.go:181] 
(0xc003e61e00) (1) Data frame handling I0914 13:02:07.881103 7 log.go:181] (0xc003e61e00) (1) Data frame sent I0914 13:02:07.881122 7 log.go:181] (0xc00346e580) (0xc003e61e00) Stream removed, broadcasting: 1 I0914 13:02:07.881144 7 log.go:181] (0xc00346e580) Go away received I0914 13:02:07.881284 7 log.go:181] (0xc00346e580) (0xc003e61e00) Stream removed, broadcasting: 1 I0914 13:02:07.881315 7 log.go:181] (0xc00346e580) (0xc003e61ea0) Stream removed, broadcasting: 3 I0914 13:02:07.881329 7 log.go:181] (0xc00346e580) (0xc003ee8c80) Stream removed, broadcasting: 5 Sep 14 13:02:07.881: INFO: Deleting pod dns-8164... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:02:07.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8164" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":218,"skipped":3242,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:02:07.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 14 13:02:07.979: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:02:25.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3352" for this suite. • [SLOW TEST:17.233 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":219,"skipped":3242,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:02:25.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 13:02:25.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18" in namespace "downward-api-8609" to be "Succeeded or Failed" Sep 14 13:02:25.277: INFO: Pod "downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18": Phase="Pending", Reason="", readiness=false. Elapsed: 42.220203ms Sep 14 13:02:27.293: INFO: Pod "downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057404432s Sep 14 13:02:29.297: INFO: Pod "downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061570101s STEP: Saw pod success Sep 14 13:02:29.297: INFO: Pod "downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18" satisfied condition "Succeeded or Failed" Sep 14 13:02:29.300: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18 container client-container: STEP: delete the pod Sep 14 13:02:29.363: INFO: Waiting for pod downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18 to disappear Sep 14 13:02:29.487: INFO: Pod downwardapi-volume-492e4cc3-06da-46e2-92a3-7fc2b94abc18 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:02:29.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8609" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:02:29.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-edff07ec-aede-445c-b700-c1f2c0ce2976 STEP: Creating secret with name secret-projected-all-test-volume-af4d2f78-81af-44ec-b0a8-8336032bd810 STEP: Creating a pod to test Check all projections for projected volume plugin Sep 14 13:02:29.619: INFO: Waiting up to 5m0s for pod "projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba" in namespace "projected-2778" to be "Succeeded or Failed" Sep 14 13:02:29.624: INFO: Pod "projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581655ms Sep 14 13:02:31.628: INFO: Pod "projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008546623s Sep 14 13:02:33.632: INFO: Pod "projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012773295s STEP: Saw pod success Sep 14 13:02:33.632: INFO: Pod "projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba" satisfied condition "Succeeded or Failed" Sep 14 13:02:33.634: INFO: Trying to get logs from node latest-worker2 pod projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba container projected-all-volume-test: STEP: delete the pod Sep 14 13:02:33.668: INFO: Waiting for pod projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba to disappear Sep 14 13:02:33.684: INFO: Pod projected-volume-25e81ab7-ad33-4ed6-917d-0846ac478fba no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:02:33.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2778" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3272,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:02:33.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-e2afecb5-cfa2-4331-b5de-bf1d4b7afd54 in namespace container-probe-2566 Sep 14 13:02:37.790: INFO: Started pod test-webserver-e2afecb5-cfa2-4331-b5de-bf1d4b7afd54 in namespace container-probe-2566 STEP: checking the pod's current state and verifying that restartCount is present Sep 14 13:02:37.793: INFO: Initial restart count of pod test-webserver-e2afecb5-cfa2-4331-b5de-bf1d4b7afd54 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:06:38.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2566" for this suite. • [SLOW TEST:245.265 seconds] [k8s.io] Probing container /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:06:38.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 14 13:06:39.070: INFO: Waiting up to 1m0s for all nodes to be 
ready Sep 14 13:07:39.101: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Sep 14 13:07:39.120: INFO: Created pod: pod0-sched-preemption-low-priority Sep 14 13:07:39.186: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:07:59.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3712" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:80.433 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":223,"skipped":3320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:07:59.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7394 STEP: creating service affinity-nodeport in namespace services-7394 STEP: creating replication controller affinity-nodeport in namespace services-7394 I0914 13:07:59.690090 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-7394, replica count: 3 I0914 13:08:02.740555 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:08:05.740860 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:08:05.749: INFO: Creating new exec pod Sep 14 13:08:10.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7394 execpod-affinity2c58v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 14 13:08:11.042: INFO: stderr: 
"I0914 13:08:10.969028 2496 log.go:181] (0xc001015130) (0xc000f188c0) Create stream\nI0914 13:08:10.969090 2496 log.go:181] (0xc001015130) (0xc000f188c0) Stream added, broadcasting: 1\nI0914 13:08:10.974108 2496 log.go:181] (0xc001015130) Reply frame received for 1\nI0914 13:08:10.974153 2496 log.go:181] (0xc001015130) (0xc000f18000) Create stream\nI0914 13:08:10.974163 2496 log.go:181] (0xc001015130) (0xc000f18000) Stream added, broadcasting: 3\nI0914 13:08:10.975221 2496 log.go:181] (0xc001015130) Reply frame received for 3\nI0914 13:08:10.975255 2496 log.go:181] (0xc001015130) (0xc000bca0a0) Create stream\nI0914 13:08:10.975266 2496 log.go:181] (0xc001015130) (0xc000bca0a0) Stream added, broadcasting: 5\nI0914 13:08:10.976112 2496 log.go:181] (0xc001015130) Reply frame received for 5\nI0914 13:08:11.035546 2496 log.go:181] (0xc001015130) Data frame received for 5\nI0914 13:08:11.035576 2496 log.go:181] (0xc000bca0a0) (5) Data frame handling\nI0914 13:08:11.035585 2496 log.go:181] (0xc000bca0a0) (5) Data frame sent\nI0914 13:08:11.035659 2496 log.go:181] (0xc001015130) Data frame received for 5\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0914 13:08:11.035676 2496 log.go:181] (0xc000bca0a0) (5) Data frame handling\nI0914 13:08:11.035683 2496 log.go:181] (0xc000bca0a0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0914 13:08:11.035852 2496 log.go:181] (0xc001015130) Data frame received for 3\nI0914 13:08:11.035866 2496 log.go:181] (0xc000f18000) (3) Data frame handling\nI0914 13:08:11.036093 2496 log.go:181] (0xc001015130) Data frame received for 5\nI0914 13:08:11.036110 2496 log.go:181] (0xc000bca0a0) (5) Data frame handling\nI0914 13:08:11.037926 2496 log.go:181] (0xc001015130) Data frame received for 1\nI0914 13:08:11.037960 2496 log.go:181] (0xc000f188c0) (1) Data frame handling\nI0914 13:08:11.037985 2496 log.go:181] (0xc000f188c0) (1) Data frame sent\nI0914 13:08:11.038007 2496 log.go:181] (0xc001015130) (0xc000f188c0) 
Stream removed, broadcasting: 1\nI0914 13:08:11.038042 2496 log.go:181] (0xc001015130) Go away received\nI0914 13:08:11.038522 2496 log.go:181] (0xc001015130) (0xc000f188c0) Stream removed, broadcasting: 1\nI0914 13:08:11.038554 2496 log.go:181] (0xc001015130) (0xc000f18000) Stream removed, broadcasting: 3\nI0914 13:08:11.038567 2496 log.go:181] (0xc001015130) (0xc000bca0a0) Stream removed, broadcasting: 5\n" Sep 14 13:08:11.043: INFO: stdout: "" Sep 14 13:08:11.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7394 execpod-affinity2c58v -- /bin/sh -x -c nc -zv -t -w 2 10.97.44.251 80' Sep 14 13:08:11.258: INFO: stderr: "I0914 13:08:11.186152 2514 log.go:181] (0xc0008a9340) (0xc000992780) Create stream\nI0914 13:08:11.186211 2514 log.go:181] (0xc0008a9340) (0xc000992780) Stream added, broadcasting: 1\nI0914 13:08:11.192116 2514 log.go:181] (0xc0008a9340) Reply frame received for 1\nI0914 13:08:11.192261 2514 log.go:181] (0xc0008a9340) (0xc0006b2000) Create stream\nI0914 13:08:11.192288 2514 log.go:181] (0xc0008a9340) (0xc0006b2000) Stream added, broadcasting: 3\nI0914 13:08:11.193287 2514 log.go:181] (0xc0008a9340) Reply frame received for 3\nI0914 13:08:11.193333 2514 log.go:181] (0xc0008a9340) (0xc000cb20a0) Create stream\nI0914 13:08:11.193355 2514 log.go:181] (0xc0008a9340) (0xc000cb20a0) Stream added, broadcasting: 5\nI0914 13:08:11.194328 2514 log.go:181] (0xc0008a9340) Reply frame received for 5\nI0914 13:08:11.251545 2514 log.go:181] (0xc0008a9340) Data frame received for 3\nI0914 13:08:11.251693 2514 log.go:181] (0xc0006b2000) (3) Data frame handling\nI0914 13:08:11.251727 2514 log.go:181] (0xc0008a9340) Data frame received for 5\nI0914 13:08:11.251741 2514 log.go:181] (0xc000cb20a0) (5) Data frame handling\nI0914 13:08:11.251754 2514 log.go:181] (0xc000cb20a0) (5) Data frame sent\nI0914 13:08:11.251766 2514 log.go:181] (0xc0008a9340) Data frame received for 5\nI0914 
13:08:11.251776 2514 log.go:181] (0xc000cb20a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.44.251 80\nConnection to 10.97.44.251 80 port [tcp/http] succeeded!\nI0914 13:08:11.253376 2514 log.go:181] (0xc0008a9340) Data frame received for 1\nI0914 13:08:11.253402 2514 log.go:181] (0xc000992780) (1) Data frame handling\nI0914 13:08:11.253422 2514 log.go:181] (0xc000992780) (1) Data frame sent\nI0914 13:08:11.253435 2514 log.go:181] (0xc0008a9340) (0xc000992780) Stream removed, broadcasting: 1\nI0914 13:08:11.253554 2514 log.go:181] (0xc0008a9340) Go away received\nI0914 13:08:11.253867 2514 log.go:181] (0xc0008a9340) (0xc000992780) Stream removed, broadcasting: 1\nI0914 13:08:11.253889 2514 log.go:181] (0xc0008a9340) (0xc0006b2000) Stream removed, broadcasting: 3\nI0914 13:08:11.253900 2514 log.go:181] (0xc0008a9340) (0xc000cb20a0) Stream removed, broadcasting: 5\n" Sep 14 13:08:11.258: INFO: stdout: "" Sep 14 13:08:11.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7394 execpod-affinity2c58v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31513' Sep 14 13:08:11.477: INFO: stderr: "I0914 13:08:11.381365 2532 log.go:181] (0xc0001f6160) (0xc000d4c1e0) Create stream\nI0914 13:08:11.381409 2532 log.go:181] (0xc0001f6160) (0xc000d4c1e0) Stream added, broadcasting: 1\nI0914 13:08:11.383375 2532 log.go:181] (0xc0001f6160) Reply frame received for 1\nI0914 13:08:11.383405 2532 log.go:181] (0xc0001f6160) (0xc000d4c280) Create stream\nI0914 13:08:11.383414 2532 log.go:181] (0xc0001f6160) (0xc000d4c280) Stream added, broadcasting: 3\nI0914 13:08:11.384127 2532 log.go:181] (0xc0001f6160) Reply frame received for 3\nI0914 13:08:11.384242 2532 log.go:181] (0xc0001f6160) (0xc000d4c320) Create stream\nI0914 13:08:11.384255 2532 log.go:181] (0xc0001f6160) (0xc000d4c320) Stream added, broadcasting: 5\nI0914 13:08:11.385046 2532 log.go:181] (0xc0001f6160) Reply frame received for 5\nI0914 
13:08:11.466531 2532 log.go:181] (0xc0001f6160) Data frame received for 5\nI0914 13:08:11.466581 2532 log.go:181] (0xc000d4c320) (5) Data frame handling\nI0914 13:08:11.466613 2532 log.go:181] (0xc000d4c320) (5) Data frame sent\nI0914 13:08:11.466631 2532 log.go:181] (0xc0001f6160) Data frame received for 5\nI0914 13:08:11.466647 2532 log.go:181] (0xc000d4c320) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31513\nConnection to 172.18.0.15 31513 port [tcp/31513] succeeded!\nI0914 13:08:11.466743 2532 log.go:181] (0xc000d4c320) (5) Data frame sent\nI0914 13:08:11.466787 2532 log.go:181] (0xc0001f6160) Data frame received for 3\nI0914 13:08:11.466815 2532 log.go:181] (0xc000d4c280) (3) Data frame handling\nI0914 13:08:11.466850 2532 log.go:181] (0xc0001f6160) Data frame received for 5\nI0914 13:08:11.466878 2532 log.go:181] (0xc000d4c320) (5) Data frame handling\nI0914 13:08:11.471476 2532 log.go:181] (0xc0001f6160) Data frame received for 1\nI0914 13:08:11.471512 2532 log.go:181] (0xc000d4c1e0) (1) Data frame handling\nI0914 13:08:11.471532 2532 log.go:181] (0xc000d4c1e0) (1) Data frame sent\nI0914 13:08:11.471557 2532 log.go:181] (0xc0001f6160) (0xc000d4c1e0) Stream removed, broadcasting: 1\nI0914 13:08:11.471584 2532 log.go:181] (0xc0001f6160) Go away received\nI0914 13:08:11.472366 2532 log.go:181] (0xc0001f6160) (0xc000d4c1e0) Stream removed, broadcasting: 1\nI0914 13:08:11.472399 2532 log.go:181] (0xc0001f6160) (0xc000d4c280) Stream removed, broadcasting: 3\nI0914 13:08:11.472417 2532 log.go:181] (0xc0001f6160) (0xc000d4c320) Stream removed, broadcasting: 5\n" Sep 14 13:08:11.477: INFO: stdout: "" Sep 14 13:08:11.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7394 execpod-affinity2c58v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31513' Sep 14 13:08:11.701: INFO: stderr: "I0914 13:08:11.630838 2550 log.go:181] (0xc0009bd3f0) (0xc0009308c0) Create stream\nI0914 
13:08:11.630883 2550 log.go:181] (0xc0009bd3f0) (0xc0009308c0) Stream added, broadcasting: 1\nI0914 13:08:11.636209 2550 log.go:181] (0xc0009bd3f0) Reply frame received for 1\nI0914 13:08:11.636268 2550 log.go:181] (0xc0009bd3f0) (0xc00063c320) Create stream\nI0914 13:08:11.636291 2550 log.go:181] (0xc0009bd3f0) (0xc00063c320) Stream added, broadcasting: 3\nI0914 13:08:11.637335 2550 log.go:181] (0xc0009bd3f0) Reply frame received for 3\nI0914 13:08:11.637387 2550 log.go:181] (0xc0009bd3f0) (0xc000930000) Create stream\nI0914 13:08:11.637402 2550 log.go:181] (0xc0009bd3f0) (0xc000930000) Stream added, broadcasting: 5\nI0914 13:08:11.638345 2550 log.go:181] (0xc0009bd3f0) Reply frame received for 5\nI0914 13:08:11.694444 2550 log.go:181] (0xc0009bd3f0) Data frame received for 5\nI0914 13:08:11.694473 2550 log.go:181] (0xc000930000) (5) Data frame handling\nI0914 13:08:11.694495 2550 log.go:181] (0xc000930000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 31513\nI0914 13:08:11.694729 2550 log.go:181] (0xc0009bd3f0) Data frame received for 5\nI0914 13:08:11.694767 2550 log.go:181] (0xc000930000) (5) Data frame handling\nI0914 13:08:11.694802 2550 log.go:181] (0xc000930000) (5) Data frame sent\nConnection to 172.18.0.16 31513 port [tcp/31513] succeeded!\nI0914 13:08:11.695173 2550 log.go:181] (0xc0009bd3f0) Data frame received for 5\nI0914 13:08:11.695201 2550 log.go:181] (0xc000930000) (5) Data frame handling\nI0914 13:08:11.695362 2550 log.go:181] (0xc0009bd3f0) Data frame received for 3\nI0914 13:08:11.695395 2550 log.go:181] (0xc00063c320) (3) Data frame handling\nI0914 13:08:11.696663 2550 log.go:181] (0xc0009bd3f0) Data frame received for 1\nI0914 13:08:11.696693 2550 log.go:181] (0xc0009308c0) (1) Data frame handling\nI0914 13:08:11.696714 2550 log.go:181] (0xc0009308c0) (1) Data frame sent\nI0914 13:08:11.696732 2550 log.go:181] (0xc0009bd3f0) (0xc0009308c0) Stream removed, broadcasting: 1\nI0914 13:08:11.696758 2550 log.go:181] (0xc0009bd3f0) Go away 
received\nI0914 13:08:11.697251 2550 log.go:181] (0xc0009bd3f0) (0xc0009308c0) Stream removed, broadcasting: 1\nI0914 13:08:11.697290 2550 log.go:181] (0xc0009bd3f0) (0xc00063c320) Stream removed, broadcasting: 3\nI0914 13:08:11.697310 2550 log.go:181] (0xc0009bd3f0) (0xc000930000) Stream removed, broadcasting: 5\n" Sep 14 13:08:11.701: INFO: stdout: "" Sep 14 13:08:11.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-7394 execpod-affinity2c58v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31513/ ; done' Sep 14 13:08:12.014: INFO: stderr: "I0914 13:08:11.850024 2568 log.go:181] (0xc00003a0b0) (0xc000a7a000) Create stream\nI0914 13:08:11.850095 2568 log.go:181] (0xc00003a0b0) (0xc000a7a000) Stream added, broadcasting: 1\nI0914 13:08:11.851944 2568 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0914 13:08:11.851997 2568 log.go:181] (0xc00003a0b0) (0xc0009cb4a0) Create stream\nI0914 13:08:11.852011 2568 log.go:181] (0xc00003a0b0) (0xc0009cb4a0) Stream added, broadcasting: 3\nI0914 13:08:11.853232 2568 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0914 13:08:11.853273 2568 log.go:181] (0xc00003a0b0) (0xc0009d4780) Create stream\nI0914 13:08:11.853287 2568 log.go:181] (0xc00003a0b0) (0xc0009d4780) Stream added, broadcasting: 5\nI0914 13:08:11.854309 2568 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0914 13:08:11.918157 2568 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 13:08:11.918182 2568 log.go:181] (0xc0009cb4a0) (3) Data frame handling\nI0914 13:08:11.918194 2568 log.go:181] (0xc0009cb4a0) (3) Data frame sent\nI0914 13:08:11.918203 2568 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 13:08:11.918208 2568 log.go:181] (0xc0009d4780) (5) Data frame handling\nI0914 13:08:11.918216 2568 log.go:181] (0xc0009d4780) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.15:31513/\nI0914 13:08:11.925215 2568 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 13:08:11.925250 2568 log.go:181] (0xc0009cb4a0) (3) Data frame handling\nI0914 13:08:11.925286 2568 log.go:181] (0xc0009cb4a0) (3) Data frame sent\nI0914 13:08:11.926002 2568 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 13:08:11.926019 2568 log.go:181] (0xc0009cb4a0) (3) Data frame handling\nI0914 13:08:11.926034 2568 log.go:181] (0xc0009cb4a0) (3) Data frame sent\nI0914 13:08:11.926053 2568 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 13:08:11.926065 2568 log.go:181] (0xc0009d4780) (5) Data frame handling\nI0914 13:08:11.926075 2568 log.go:181] (0xc0009d4780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31513/\n[... near-identical data-frame handling lines for the remaining curl iterations elided ...]\nI0914 13:08:12.009709 2568 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0914 13:08:12.009729 2568 log.go:181] (0xc000a7a000) (1) Data frame handling\nI0914 13:08:12.009742 2568 log.go:181] (0xc000a7a000) (1) Data frame sent\nI0914 13:08:12.009833 2568 log.go:181] (0xc00003a0b0) (0xc000a7a000) Stream removed, broadcasting: 1\nI0914 13:08:12.009854 2568 log.go:181] (0xc00003a0b0) Go away received\nI0914 13:08:12.010220 2568 log.go:181] (0xc00003a0b0) (0xc000a7a000) Stream removed, broadcasting: 1\nI0914 13:08:12.010240 2568 log.go:181] (0xc00003a0b0) (0xc0009cb4a0) Stream removed, broadcasting: 3\nI0914 13:08:12.010249 2568 log.go:181] (0xc00003a0b0) (0xc0009d4780) Stream removed, broadcasting: 5\n" Sep 14 13:08:12.015: INFO: stdout: "\naffinity-nodeport-p7lmm\n[... the same pod name repeated, 16 lines in total ...]\naffinity-nodeport-p7lmm" Sep 14 13:08:12.015: INFO: Received response from host: affinity-nodeport-p7lmm [... logged once per response, 16 in total ...] Sep 14 13:08:12.015: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-7394, will wait for the garbage collector to delete the pods Sep 14 13:08:12.105: INFO: Deleting ReplicationController affinity-nodeport took: 6.47778ms Sep 14 13:08:14.405: INFO: Terminating ReplicationController affinity-nodeport pods took: 2.300262713s [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:08:26.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7394" for this suite. 
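The affinity verified above (all 16 curl requests answered by the same pod, affinity-nodeport-p7lmm) comes from the Service's sessionAffinity field. A minimal sketch of a NodePort Service with client-IP affinity; the names and ports are illustrative, not the manifest the test actually created:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport      # illustrative; matches the service name seen in the log
spec:
  type: NodePort
  sessionAffinity: ClientIP    # pin each client IP to a single backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # default affinity window (3 hours)
  selector:
    app: affinity-nodeport
  ports:
  - port: 80
    targetPort: 9376
```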
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.655 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":224,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:08:26.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in 
OpenAPI documentation Sep 14 13:08:26.122: INFO: >>> kubeConfig: /root/.kube/config Sep 14 13:08:28.091: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:08:38.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9774" for this suite. • [SLOW TEST:12.868 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":225,"skipped":3382,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:08:38.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 14 13:08:39.019: INFO: Waiting up to 1m0s for all nodes to be ready Sep 14 13:09:39.043: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:09:39.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
Sep 14 13:09:43.147: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:09:59.348: INFO: pods created so far: [1 1 1] Sep 14 13:09:59.348: INFO: length of pods created so far: 3 Sep 14 13:10:11.357: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:18.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7560" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:18.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7110" for this suite. 
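The preemption path exercised above relies on pod priority: pods carry PriorityClasses, and when the node fills up the scheduler evicts lower-priority pods to admit higher-priority ones. A minimal sketch of the objects involved, with illustrative names (not the test's actual manifests):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative name
value: 1000000                 # larger value wins; may preempt lower-priority pods
globalDefault: false
description: "Pods in this class may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor              # illustrative name
spec:
  priorityClassName: high-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```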
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:99.587 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":226,"skipped":3394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:10:18.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 13:10:18.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352" in namespace "projected-6560" to be "Succeeded or Failed" Sep 14 13:10:18.627: INFO: Pod "downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352": Phase="Pending", Reason="", readiness=false. Elapsed: 3.498167ms Sep 14 13:10:20.633: INFO: Pod "downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009852547s Sep 14 13:10:22.638: INFO: Pod "downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014945282s STEP: Saw pod success Sep 14 13:10:22.638: INFO: Pod "downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352" satisfied condition "Succeeded or Failed" Sep 14 13:10:22.642: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352 container client-container: STEP: delete the pod Sep 14 13:10:22.686: INFO: Waiting for pod downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352 to disappear Sep 14 13:10:22.698: INFO: Pod downwardapi-volume-fdd1fabf-e479-42e0-9fbd-5672426e4352 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:22.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6560" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3420,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:10:22.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 14 13:10:23.379: INFO: created pod pod-service-account-defaultsa Sep 14 13:10:23.379: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 14 13:10:23.387: INFO: created pod pod-service-account-mountsa Sep 14 13:10:23.387: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 14 13:10:23.414: INFO: created pod pod-service-account-nomountsa Sep 14 13:10:23.414: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 14 13:10:23.485: INFO: created pod pod-service-account-defaultsa-mountspec Sep 14 13:10:23.485: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 14 13:10:23.495: INFO: created pod pod-service-account-mountsa-mountspec Sep 14 13:10:23.495: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Sep 14 13:10:23.544: INFO: created pod pod-service-account-nomountsa-mountspec Sep 14 13:10:23.544: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 14 13:10:23.630: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 14 13:10:23.630: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 14 13:10:23.694: INFO: created pod pod-service-account-mountsa-nomountspec Sep 14 13:10:23.694: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 14 13:10:23.755: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 14 13:10:23.755: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:23.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3897" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":228,"skipped":3441,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:10:24.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:25.561: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9975" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":229,"skipped":3452,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:10:26.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 14 13:10:27.183: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Sep 14 13:10:27.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:44.795: INFO: stderr: "" Sep 14 13:10:44.796: INFO: stdout: "service/agnhost-replica created\n" Sep 14 13:10:44.796: 
INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Sep 14 13:10:44.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:46.091: INFO: stderr: "" Sep 14 13:10:46.091: INFO: stdout: "service/agnhost-primary created\n" Sep 14 13:10:46.092: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Sep 14 13:10:46.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:46.411: INFO: stderr: "" Sep 14 13:10:46.411: INFO: stdout: "service/frontend created\n" Sep 14 13:10:46.411: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Sep 14 13:10:46.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:46.715: INFO: stderr: "" Sep 14 13:10:46.715: INFO: stdout: "deployment.apps/frontend created\n" Sep 14 13:10:46.716: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Sep 14 13:10:46.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:47.120: INFO: stderr: "" Sep 14 13:10:47.120: INFO: stdout: "deployment.apps/agnhost-primary created\n" Sep 14 13:10:47.121: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Sep 14 13:10:47.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-34' Sep 14 13:10:47.425: INFO: stderr: "" Sep 14 13:10:47.425: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 14 13:10:47.425: INFO: Waiting for all frontend pods to be Running. Sep 14 13:10:57.475: INFO: Waiting for frontend to serve content. Sep 14 13:10:57.485: INFO: Trying to add a new entry to the guestbook. Sep 14 13:10:57.497: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 14 13:10:57.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:57.675: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:57.675: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 14 13:10:57.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:57.860: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:57.860: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 14 13:10:57.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:58.056: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:58.056: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 14 13:10:58.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:58.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:58.164: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 14 13:10:58.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:58.363: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:58.363: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 14 13:10:58.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-34' Sep 14 13:10:58.903: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:10:58.903: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:10:58.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-34" for this suite. 
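The manifests piped to `kubectl create -f -` in the guestbook test above are flattened onto single lines by the log writer. Re-indented for readability, the frontend Service manifest reads as follows (reconstructed from the log text itself; the indentation is assumed, the field values are taken verbatim from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```

The other inline manifests (the agnhost-primary Service and the frontend, agnhost-primary, and agnhost-replica Deployments) follow the same flattening and can be read back the same way.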
• [SLOW TEST:32.505 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":230,"skipped":3471,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:10:58.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 14 13:10:59.564: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7691' Sep 14 13:10:59.721: INFO: stderr: "" Sep 14 13:10:59.721: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Sep 14 13:10:59.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-7691' Sep 14 13:10:59.946: INFO: stderr: "" Sep 14 13:10:59.946: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-14T13:10:59Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-14T13:10:59Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7691\",\n \"resourceVersion\": \"279434\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7691/pods/e2e-test-httpd-pod\",\n \"uid\": \"9673eaf6-2f08-4de4-9fee-40f70e72b0bc\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n 
\"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-srn5d\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-srn5d\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-srn5d\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-14T13:10:59Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Sep 14 13:10:59.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7691' Sep 14 13:11:00.688: INFO: stderr: "W0914 13:11:00.022223 2839 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Sep 14 13:11:00.688: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Sep 14 13:11:00.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7691' Sep 14 13:11:05.923: INFO: stderr: "" Sep 14 13:11:05.923: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:11:05.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7691" for this suite. • [SLOW TEST:6.962 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":231,"skipped":3484,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:11:05.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:11:11.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7667" for this suite. • [SLOW TEST:5.171 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":232,"skipped":3489,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:11:11.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:11:11.239: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 14 13:11:16.251: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 14 13:11:16.251: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 14 13:11:18.255: INFO: Creating deployment "test-rollover-deployment" Sep 14 13:11:18.265: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 14 13:11:20.272: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 14 13:11:20.297: INFO: Ensure that both replica sets have 1 created replica Sep 14 13:11:20.303: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 14 13:11:20.310: INFO: Updating deployment test-rollover-deployment Sep 14 13:11:20.310: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 14 13:11:22.474: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 14 13:11:22.481: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 14 13:11:22.487: INFO: all replica sets need to contain the pod-template-hash label Sep 14 
13:11:22.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685880, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:11:24.498: INFO: all replica sets need to contain the pod-template-hash label Sep 14 13:11:24.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} 
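The DeploymentStatus dumps above show `Replicas:2` with `AvailableReplicas:1` while the deployment spec (printed later in this test) uses `MaxSurge:1, MaxUnavailable:0, MinReadySeconds:10`. A minimal sketch of the standard RollingUpdate bound arithmetic that produces those numbers (illustrative only, not Kubernetes controller source; the function name is invented for this example):

```python
# Illustrative sketch of Kubernetes RollingUpdate replica bounds.
# For test-rollover-deployment: spec.replicas=1, maxSurge=1, maxUnavailable=0,
# so old + new ReplicaSets may total up to 2 pods (hence Replicas:2 in the
# status) while at least 1 pod must stay available throughout the rollout.

def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (max_total_pods, min_available_pods) during a RollingUpdate."""
    max_total = replicas + max_surge            # old + new ReplicaSets combined
    min_available = replicas - max_unavailable  # availability floor
    return max_total, min_available

# Values matching this test's observed status (Replicas:2, AvailableReplicas:1):
print(rolling_update_bounds(1, 1, 0))  # (2, 1)
```

The repeated status polls above are the test waiting for the new pod to satisfy `MinReadySeconds:10` before the old ReplicaSet is scaled down, which is why `UnavailableReplicas:1` persists for several iterations after `ReadyReplicas` reaches 2.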
Sep 14 13:11:26.494: INFO: all replica sets need to contain the pod-template-hash label Sep 14 13:11:26.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:11:28.495: INFO: all replica sets need to contain the pod-template-hash label Sep 14 13:11:28.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:11:30.495: INFO: all replica sets need to contain the pod-template-hash label Sep 14 13:11:30.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685883, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:11:32.496: INFO: all replica sets need to contain the pod-template-hash label Sep 14 13:11:32.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685883, loc:(*time.Location)(0x7702840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685878, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:11:34.496: INFO: Sep 14 13:11:34.496: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 14 13:11:34.503: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-489 /apis/apps/v1/namespaces/deployment-489/deployments/test-rollover-deployment 2ed6d11c-9878-41ed-9229-014c906bd459 279713 2 2020-09-14 13:11:18 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-14 13:11:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 13:11:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0069c0d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-14 13:11:18 +0000 
UTC,LastTransitionTime:2020-09-14 13:11:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-14 13:11:33 +0000 UTC,LastTransitionTime:2020-09-14 13:11:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 14 13:11:34.506: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-489 /apis/apps/v1/namespaces/deployment-489/replicasets/test-rollover-deployment-5797c7764 c68969c7-6487-42cf-9dbd-40a12d9d0298 279702 2 2020-09-14 13:11:20 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 2ed6d11c-9878-41ed-9229-014c906bd459 0xc0069c1230 0xc0069c1231}] [] [{kube-controller-manager Update apps/v1 2020-09-14 13:11:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ed6d11c-9878-41ed-9229-014c906bd459\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0069c12a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 14 13:11:34.506: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 14 13:11:34.506: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-489 /apis/apps/v1/namespaces/deployment-489/replicasets/test-rollover-controller 316c0359-91e6-4161-8e2d-73190cc93529 279712 2 2020-09-14 13:11:11 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2ed6d11c-9878-41ed-9229-014c906bd459 0xc0069c1127 0xc0069c1128}] [] [{e2e.test Update apps/v1 2020-09-14 13:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 13:11:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ed6d11c-9878-41ed-9229-014c906bd459\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0069c11c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 13:11:34.506: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-489 /apis/apps/v1/namespaces/deployment-489/replicasets/test-rollover-deployment-78bc8b888c f7812228-aeb1-4c87-8d12-fde7bc3e3f24 279652 2 2020-09-14 13:11:18 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2ed6d11c-9878-41ed-9229-014c906bd459 0xc0069c1317 0xc0069c1318}] [] [{kube-controller-manager Update apps/v1 2020-09-14 13:11:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ed6d11c-9878-41ed-9229-014c906bd459\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0069c13a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 13:11:34.509: INFO: Pod "test-rollover-deployment-5797c7764-8jl9c" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-8jl9c test-rollover-deployment-5797c7764- deployment-489 /api/v1/namespaces/deployment-489/pods/test-rollover-deployment-5797c7764-8jl9c 69c792a3-b760-4709-91fe-a5dcf835f92c 279668 0 2020-09-14 13:11:20 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 c68969c7-6487-42cf-9dbd-40a12d9d0298 0xc003df0a80 0xc003df0a81}] [] [{kube-controller-manager Update v1 2020-09-14 13:11:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c68969c7-6487-42cf-9dbd-40a12d9d0298\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 13:11:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cpc9w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cpc9w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cpc9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:11:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:11:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:11:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.27,StartTime:2020-09-14 13:11:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 13:11:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://85fb7a3ef855cb8e29665c5286682af1d1a3b18e8f4f80adc66853b3ecafccbd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:11:34.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-489" for this suite. 
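The rollover test above passes once the old ReplicaSet is scaled down (`Replicas:*0` in the dump) and the new one is fully available. As a rough illustration only — using plain dicts with field names mirroring the `ReplicaSetStatus`/`DeploymentStatus` dumps above, not the real client-go types — the completeness check amounts to:

```python
def rollover_complete(old_rs_replicas, status, desired, generation):
    """Hypothetical sketch of a deployment-rollover check: the old
    ReplicaSet must be scaled to zero, and the deployment status must
    show every replica updated, available, and observed at the latest
    spec generation."""
    return (old_rs_replicas == 0
            and status["updatedReplicas"] == desired
            and status["availableReplicas"] == desired
            and status["observedGeneration"] >= generation)
```

This is a sketch of the condition the framework waits on, not the e2e framework's actual implementation.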
• [SLOW TEST:23.390 seconds]
[sig-apps] Deployment
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":233,"skipped":3507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:11:34.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 13:11:34.564: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:11:35.756: INFO:
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8496" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":234,"skipped":3543,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:11:35.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 14 13:11:35.895: INFO: Waiting up to 5m0s for pod "downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14" in namespace "downward-api-7924" to be "Succeeded or Failed" Sep 14 13:11:35.916: INFO: Pod "downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14": Phase="Pending", Reason="", readiness=false. Elapsed: 20.20648ms Sep 14 13:11:37.929: INFO: Pod "downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033857199s Sep 14 13:11:39.960: INFO: Pod "downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064972563s STEP: Saw pod success Sep 14 13:11:39.960: INFO: Pod "downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14" satisfied condition "Succeeded or Failed" Sep 14 13:11:39.975: INFO: Trying to get logs from node latest-worker2 pod downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14 container dapi-container: STEP: delete the pod Sep 14 13:11:40.140: INFO: Waiting for pod downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14 to disappear Sep 14 13:11:40.175: INFO: Pod downward-api-95d2ff04-11c4-4dc3-aadb-38106cc85c14 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:11:40.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7924" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3544,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:11:40.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 
[It] should invoke init containers on a RestartNever pod [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Sep 14 13:11:40.279: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:11:49.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4623" for this suite.
• [SLOW TEST:9.111 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":236,"skipped":3545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:11:49.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 13:11:49.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9" in namespace "projected-5663" to be "Succeeded or Failed" Sep 14 13:11:49.546: INFO: Pod "downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 80.555443ms Sep 14 13:11:51.557: INFO: Pod "downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092072103s Sep 14 13:11:53.562: INFO: Pod "downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.097170092s STEP: Saw pod success Sep 14 13:11:53.562: INFO: Pod "downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9" satisfied condition "Succeeded or Failed" Sep 14 13:11:53.565: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9 container client-container: STEP: delete the pod Sep 14 13:11:53.680: INFO: Waiting for pod downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9 to disappear Sep 14 13:11:53.684: INFO: Pod downwardapi-volume-a9bb35f1-4f60-4151-9be0-0a89a823a9b9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:11:53.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5663" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:11:53.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-mtdg STEP: Creating a pod to test atomic-volume-subpath Sep 14 13:11:53.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mtdg" in namespace "subpath-9464" to be "Succeeded or Failed" Sep 14 13:11:53.863: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Pending", Reason="", readiness=false. Elapsed: 46.851282ms Sep 14 13:11:55.870: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053606291s Sep 14 13:11:57.876: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058969938s Sep 14 13:11:59.880: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 6.063896616s Sep 14 13:12:01.886: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 8.069033271s Sep 14 13:12:03.891: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 10.073999869s Sep 14 13:12:05.895: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 12.078106181s Sep 14 13:12:07.900: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 14.083461706s Sep 14 13:12:09.905: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 16.08864254s Sep 14 13:12:11.910: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.093683061s Sep 14 13:12:13.915: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 20.098733681s Sep 14 13:12:15.920: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 22.103623922s Sep 14 13:12:17.925: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Running", Reason="", readiness=true. Elapsed: 24.108123584s Sep 14 13:12:19.929: INFO: Pod "pod-subpath-test-configmap-mtdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.112919303s STEP: Saw pod success Sep 14 13:12:19.930: INFO: Pod "pod-subpath-test-configmap-mtdg" satisfied condition "Succeeded or Failed" Sep 14 13:12:19.933: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-mtdg container test-container-subpath-configmap-mtdg: STEP: delete the pod Sep 14 13:12:19.968: INFO: Waiting for pod pod-subpath-test-configmap-mtdg to disappear Sep 14 13:12:19.978: INFO: Pod pod-subpath-test-configmap-mtdg no longer exists STEP: Deleting pod pod-subpath-test-configmap-mtdg Sep 14 13:12:19.978: INFO: Deleting pod "pod-subpath-test-configmap-mtdg" in namespace "subpath-9464" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:19.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9464" for this suite. 
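The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" entries above come from a simple poll loop: check the pod phase, log the elapsed time, sleep, repeat. A minimal self-contained sketch of that pattern (the `get_phase` callable stands in for an API query; this is not the framework's real `WaitForPodSuccessInNamespace`):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    printing elapsed time the way the e2e log lines above do."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f'Phase="{phase}". Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

With the real framework the interval is roughly the ~2s gaps visible between the timestamps above.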
• [SLOW TEST:26.293 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":238,"skipped":3678,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:12:19.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr
conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 14 13:12:20.589: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 14 13:12:22.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685940, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685940, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685940, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685940, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 13:12:25.684: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:12:25.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:12:26.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5319" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.956 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":239,"skipped":3689,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:12:26.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default
service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 13:12:27.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 13:12:29.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:12:31.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, 
loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735685947, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 13:12:34.866: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:12:34.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4529-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:36.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2745" for this suite. STEP: Destroying namespace "webhook-2745-markers" for this suite. 
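Between test blocks the run also emits JSON progress records like `{"msg":"PASSED ...","total":303,"completed":240,...}`. A hedged, illustrative parser for tallying those records from raw log lines (the helper name and line-based parsing are my own; the field names come from the records above):

```python
import json

def last_progress(lines):
    """Return the most recent Ginkgo progress record among raw log
    lines, parsed from the embedded '{"msg": ...}' JSON markers."""
    record = None
    for line in lines:
        line = line.strip().lstrip("\u2022")  # some records follow a '•' bullet
        if line.startswith('{"msg"'):
            record = json.loads(line)
    return record
```

Given the record for test 240 above, `record["total"] - record["completed"]` gives the number of specs still to run.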
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.394 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":240,"skipped":3705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:12:36.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name
configmap-test-volume-map-f38960e1-bf8a-4122-bc93-87f8dda997e8 STEP: Creating a pod to test consume configMaps Sep 14 13:12:36.957: INFO: Waiting up to 5m0s for pod "pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c" in namespace "configmap-8622" to be "Succeeded or Failed" Sep 14 13:12:37.067: INFO: Pod "pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.005424ms Sep 14 13:12:39.072: INFO: Pod "pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114435002s Sep 14 13:12:41.084: INFO: Pod "pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126529088s STEP: Saw pod success Sep 14 13:12:41.084: INFO: Pod "pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c" satisfied condition "Succeeded or Failed" Sep 14 13:12:41.086: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c container configmap-volume-test: STEP: delete the pod Sep 14 13:12:41.118: INFO: Waiting for pod pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c to disappear Sep 14 13:12:41.125: INFO: Pod pod-configmaps-56c2326e-b777-4252-af6c-04db846df43c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:41.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8622" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:12:41.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:12:41.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config version' Sep 14 13:12:41.376: INFO: stderr: "" Sep 14 13:12:41.376: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.2-rc.0\", GitCommit:\"02b16f24873ef5ed4b0ad85d704237a2c1cbfb6e\", GitTreeState:\"clean\", BuildDate:\"2020-09-09T11:47:06Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", 
GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:41.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4612" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":242,"skipped":3776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:12:41.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:12:41.522: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 14 13:12:43.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7331 create -f -' Sep 14 13:12:46.843: INFO: 
stderr: "" Sep 14 13:12:46.843: INFO: stdout: "e2e-test-crd-publish-openapi-9561-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 14 13:12:46.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7331 delete e2e-test-crd-publish-openapi-9561-crds test-cr' Sep 14 13:12:46.953: INFO: stderr: "" Sep 14 13:12:46.953: INFO: stdout: "e2e-test-crd-publish-openapi-9561-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 14 13:12:46.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7331 apply -f -' Sep 14 13:12:47.244: INFO: stderr: "" Sep 14 13:12:47.244: INFO: stdout: "e2e-test-crd-publish-openapi-9561-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 14 13:12:47.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7331 delete e2e-test-crd-publish-openapi-9561-crds test-cr' Sep 14 13:12:47.348: INFO: stderr: "" Sep 14 13:12:47.348: INFO: stdout: "e2e-test-crd-publish-openapi-9561-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 14 13:12:47.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9561-crds' Sep 14 13:12:47.630: INFO: stderr: "" Sep 14 13:12:47.630: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9561-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 
13:12:50.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7331" for this suite. • [SLOW TEST:9.256 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":243,"skipped":3812,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:12:50.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 14 13:12:50.718: INFO: Waiting up to 5m0s for pod 
"downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f" in namespace "downward-api-7104" to be "Succeeded or Failed" Sep 14 13:12:50.755: INFO: Pod "downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.942076ms Sep 14 13:12:52.759: INFO: Pod "downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041188925s Sep 14 13:12:54.763: INFO: Pod "downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045553335s STEP: Saw pod success Sep 14 13:12:54.764: INFO: Pod "downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f" satisfied condition "Succeeded or Failed" Sep 14 13:12:54.767: INFO: Trying to get logs from node latest-worker2 pod downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f container dapi-container: STEP: delete the pod Sep 14 13:12:54.798: INFO: Waiting for pod downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f to disappear Sep 14 13:12:54.833: INFO: Pod downward-api-00eb6aa3-bf68-4794-b764-567a7c44398f no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:54.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7104" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":3814,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:12:54.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 13:12:54.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47" in namespace "projected-8901" to be "Succeeded or Failed" Sep 14 13:12:54.947: INFO: Pod "downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47": Phase="Pending", Reason="", readiness=false. Elapsed: 16.405614ms Sep 14 13:12:56.951: INFO: Pod "downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020377512s Sep 14 13:12:58.956: INFO: Pod "downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025819898s STEP: Saw pod success Sep 14 13:12:58.956: INFO: Pod "downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47" satisfied condition "Succeeded or Failed" Sep 14 13:12:58.959: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47 container client-container: STEP: delete the pod Sep 14 13:12:58.989: INFO: Waiting for pod downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47 to disappear Sep 14 13:12:59.000: INFO: Pod downwardapi-volume-9692ef1e-2cfc-4a88-8a64-d9fe683a8e47 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:12:59.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8901" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":3822,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:12:59.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 14 13:12:59.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa" in namespace "downward-api-2607" to be "Succeeded or Failed" Sep 14 13:12:59.110: INFO: Pod "downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 12.527207ms Sep 14 13:13:01.115: INFO: Pod "downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017856688s Sep 14 13:13:03.408: INFO: Pod "downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31106222s STEP: Saw pod success Sep 14 13:13:03.408: INFO: Pod "downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa" satisfied condition "Succeeded or Failed" Sep 14 13:13:03.411: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa container client-container: STEP: delete the pod Sep 14 13:13:03.445: INFO: Waiting for pod downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa to disappear Sep 14 13:13:03.459: INFO: Pod downwardapi-volume-158cae92-3c95-4b8e-bf05-42a028a60eaa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:03.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2607" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":3824,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:13:03.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4363 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 14 13:13:03.585: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 14 13:13:03.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 14 13:13:05.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 14 13:13:07.708: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:09.670: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:11.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:13.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 
13:13:15.670: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:17.670: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:19.670: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:21.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 14 13:13:23.670: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 14 13:13:23.676: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 14 13:13:25.681: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 14 13:13:29.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.237:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:13:29.732: INFO: >>> kubeConfig: /root/.kube/config I0914 13:13:29.764338 7 log.go:181] (0xc005f64630) (0xc00376ed20) Create stream I0914 13:13:29.764374 7 log.go:181] (0xc005f64630) (0xc00376ed20) Stream added, broadcasting: 1 I0914 13:13:29.766431 7 log.go:181] (0xc005f64630) Reply frame received for 1 I0914 13:13:29.766465 7 log.go:181] (0xc005f64630) (0xc003dcf680) Create stream I0914 13:13:29.766482 7 log.go:181] (0xc005f64630) (0xc003dcf680) Stream added, broadcasting: 3 I0914 13:13:29.767342 7 log.go:181] (0xc005f64630) Reply frame received for 3 I0914 13:13:29.767376 7 log.go:181] (0xc005f64630) (0xc003836d20) Create stream I0914 13:13:29.767388 7 log.go:181] (0xc005f64630) (0xc003836d20) Stream added, broadcasting: 5 I0914 13:13:29.768285 7 log.go:181] (0xc005f64630) Reply frame received for 5 I0914 13:13:29.856622 7 log.go:181] (0xc005f64630) Data frame received for 5 I0914 13:13:29.856658 7 log.go:181] (0xc003836d20) (5) Data frame handling I0914 13:13:29.856702 7 log.go:181] (0xc005f64630) Data frame received for 3 I0914 
13:13:29.856737 7 log.go:181] (0xc003dcf680) (3) Data frame handling I0914 13:13:29.856766 7 log.go:181] (0xc003dcf680) (3) Data frame sent I0914 13:13:29.856789 7 log.go:181] (0xc005f64630) Data frame received for 3 I0914 13:13:29.856806 7 log.go:181] (0xc003dcf680) (3) Data frame handling I0914 13:13:29.858288 7 log.go:181] (0xc005f64630) Data frame received for 1 I0914 13:13:29.858313 7 log.go:181] (0xc00376ed20) (1) Data frame handling I0914 13:13:29.858333 7 log.go:181] (0xc00376ed20) (1) Data frame sent I0914 13:13:29.858395 7 log.go:181] (0xc005f64630) (0xc00376ed20) Stream removed, broadcasting: 1 I0914 13:13:29.858429 7 log.go:181] (0xc005f64630) Go away received I0914 13:13:29.858526 7 log.go:181] (0xc005f64630) (0xc00376ed20) Stream removed, broadcasting: 1 I0914 13:13:29.858562 7 log.go:181] (0xc005f64630) (0xc003dcf680) Stream removed, broadcasting: 3 I0914 13:13:29.858581 7 log.go:181] (0xc005f64630) (0xc003836d20) Stream removed, broadcasting: 5 Sep 14 13:13:29.858: INFO: Found all expected endpoints: [netserver-0] Sep 14 13:13:29.862: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.38:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:13:29.862: INFO: >>> kubeConfig: /root/.kube/config I0914 13:13:29.895822 7 log.go:181] (0xc00346e580) (0xc003dcfc20) Create stream I0914 13:13:29.895864 7 log.go:181] (0xc00346e580) (0xc003dcfc20) Stream added, broadcasting: 1 I0914 13:13:29.898339 7 log.go:181] (0xc00346e580) Reply frame received for 1 I0914 13:13:29.898392 7 log.go:181] (0xc00346e580) (0xc000da4000) Create stream I0914 13:13:29.898410 7 log.go:181] (0xc00346e580) (0xc000da4000) Stream added, broadcasting: 3 I0914 13:13:29.899523 7 log.go:181] (0xc00346e580) Reply frame received for 3 I0914 13:13:29.899567 7 log.go:181] (0xc00346e580) 
(0xc003d2d720) Create stream I0914 13:13:29.899586 7 log.go:181] (0xc00346e580) (0xc003d2d720) Stream added, broadcasting: 5 I0914 13:13:29.900653 7 log.go:181] (0xc00346e580) Reply frame received for 5 I0914 13:13:29.967177 7 log.go:181] (0xc00346e580) Data frame received for 3 I0914 13:13:29.967214 7 log.go:181] (0xc000da4000) (3) Data frame handling I0914 13:13:29.967239 7 log.go:181] (0xc000da4000) (3) Data frame sent I0914 13:13:29.967579 7 log.go:181] (0xc00346e580) Data frame received for 3 I0914 13:13:29.967609 7 log.go:181] (0xc000da4000) (3) Data frame handling I0914 13:13:29.967691 7 log.go:181] (0xc00346e580) Data frame received for 5 I0914 13:13:29.967715 7 log.go:181] (0xc003d2d720) (5) Data frame handling I0914 13:13:29.969345 7 log.go:181] (0xc00346e580) Data frame received for 1 I0914 13:13:29.969363 7 log.go:181] (0xc003dcfc20) (1) Data frame handling I0914 13:13:29.969372 7 log.go:181] (0xc003dcfc20) (1) Data frame sent I0914 13:13:29.969400 7 log.go:181] (0xc00346e580) (0xc003dcfc20) Stream removed, broadcasting: 1 I0914 13:13:29.969445 7 log.go:181] (0xc00346e580) Go away received I0914 13:13:29.969485 7 log.go:181] (0xc00346e580) (0xc003dcfc20) Stream removed, broadcasting: 1 I0914 13:13:29.969512 7 log.go:181] (0xc00346e580) (0xc000da4000) Stream removed, broadcasting: 3 I0914 13:13:29.969525 7 log.go:181] (0xc00346e580) (0xc003d2d720) Stream removed, broadcasting: 5 Sep 14 13:13:29.969: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:29.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4363" for this suite. 
• [SLOW TEST:26.509 seconds] [sig-network] Networking /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":3830,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:13:29.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to 
kubernetes Sep 14 13:13:30.114: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:42.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3667" for this suite. • [SLOW TEST:12.594 seconds] [k8s.io] Pods /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:13:42.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 14 13:13:49.790: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:50.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7611" for this suite. • [SLOW TEST:8.287 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":249,"skipped":3914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Sep 14 13:13:50.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 14 13:13:55.621: INFO: Successfully updated pod "labelsupdate289a59ec-da8b-453c-88ac-7593a4ee62ad" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:57.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3241" for this suite. 
• [SLOW TEST:7.248 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":3944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:13:58.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Sep 14 13:13:58.382: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config api-versions' Sep 14 13:13:58.645: INFO: stderr: "" Sep 14 13:13:58.645: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:13:58.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3047" for this suite. 
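The api-versions check above only needs to confirm that the core `v1` group/version appears in the server's list. A standalone sketch of that assertion, using a hypothetical snapshot of `kubectl api-versions` output in place of a live cluster:

```shell
# Hypothetical (abbreviated) snapshot of `kubectl api-versions` output; a
# real cluster would be needed to produce this list. The conformance test
# simply asserts that the bare "v1" entry is present.
api_versions='admissionregistration.k8s.io/v1
apps/v1
batch/v1
v1'

# -x matches the whole line, so "apps/v1" does not count as a false positive.
if printf '%s\n' "$api_versions" | grep -qx 'v1'; then
  echo "v1 is available"
fi
```

Against a live cluster the equivalent one-liner would be `kubectl api-versions | grep -qx v1`.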
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":251,"skipped":4071,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:13:58.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1356.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1356.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1356.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1356.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1356.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:14:04.889: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-1356/dns-test-2a7f9816-1905-4a87-97d5-4575610d9dc8: Get "https://172.30.12.66:42909/api/v1/namespaces/dns-1356/pods/dns-test-2a7f9816-1905-4a87-97d5-4575610d9dc8/proxy/results/jessie_hosts@dns-querier-1": stream error: stream ID 16267; INTERNAL_ERROR Sep 14 13:14:04.895: INFO: Lookups using dns-1356/dns-test-2a7f9816-1905-4a87-97d5-4575610d9dc8 failed for: [jessie_hosts@dns-querier-1] Sep 14 13:14:09.920: INFO: DNS probes using dns-1356/dns-test-2a7f9816-1905-4a87-97d5-4575610d9dc8 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:14:09.979: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1356" for this suite. • [SLOW TEST:11.362 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":252,"skipped":4073,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:14:10.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3414 STEP: Creating active service to test reachability when its 
FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3414 STEP: creating replication controller externalsvc in namespace services-3414 I0914 13:14:10.639242 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3414, replica count: 2 I0914 13:14:13.689695 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:14:16.689968 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 14 13:14:16.805: INFO: Creating new exec pod Sep 14 13:14:20.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3414 execpod74spq -- /bin/sh -x -c nslookup nodeport-service.services-3414.svc.cluster.local' Sep 14 13:14:21.065: INFO: stderr: "I0914 13:14:20.961904 3003 log.go:181] (0xc0008c6f20) (0xc00080f180) Create stream\nI0914 13:14:20.961962 3003 log.go:181] (0xc0008c6f20) (0xc00080f180) Stream added, broadcasting: 1\nI0914 13:14:20.966388 3003 log.go:181] (0xc0008c6f20) Reply frame received for 1\nI0914 13:14:20.966416 3003 log.go:181] (0xc0008c6f20) (0xc0007ae640) Create stream\nI0914 13:14:20.966423 3003 log.go:181] (0xc0008c6f20) (0xc0007ae640) Stream added, broadcasting: 3\nI0914 13:14:20.967187 3003 log.go:181] (0xc0008c6f20) Reply frame received for 3\nI0914 13:14:20.967210 3003 log.go:181] (0xc0008c6f20) (0xc0007af360) Create stream\nI0914 13:14:20.967226 3003 log.go:181] (0xc0008c6f20) (0xc0007af360) Stream added, broadcasting: 5\nI0914 13:14:20.967856 3003 log.go:181] (0xc0008c6f20) Reply frame received for 5\nI0914 13:14:21.044917 3003 log.go:181] (0xc0008c6f20) Data frame received for 5\nI0914 13:14:21.044952 3003 log.go:181] (0xc0007af360) (5) 
Data frame handling\nI0914 13:14:21.044978 3003 log.go:181] (0xc0007af360) (5) Data frame sent\n+ nslookup nodeport-service.services-3414.svc.cluster.local\nI0914 13:14:21.057400 3003 log.go:181] (0xc0008c6f20) Data frame received for 3\nI0914 13:14:21.057427 3003 log.go:181] (0xc0007ae640) (3) Data frame handling\nI0914 13:14:21.057446 3003 log.go:181] (0xc0007ae640) (3) Data frame sent\nI0914 13:14:21.058155 3003 log.go:181] (0xc0008c6f20) Data frame received for 3\nI0914 13:14:21.058174 3003 log.go:181] (0xc0007ae640) (3) Data frame handling\nI0914 13:14:21.058191 3003 log.go:181] (0xc0007ae640) (3) Data frame sent\nI0914 13:14:21.058755 3003 log.go:181] (0xc0008c6f20) Data frame received for 3\nI0914 13:14:21.058776 3003 log.go:181] (0xc0007ae640) (3) Data frame handling\nI0914 13:14:21.058868 3003 log.go:181] (0xc0008c6f20) Data frame received for 5\nI0914 13:14:21.058880 3003 log.go:181] (0xc0007af360) (5) Data frame handling\nI0914 13:14:21.060478 3003 log.go:181] (0xc0008c6f20) Data frame received for 1\nI0914 13:14:21.060492 3003 log.go:181] (0xc00080f180) (1) Data frame handling\nI0914 13:14:21.060498 3003 log.go:181] (0xc00080f180) (1) Data frame sent\nI0914 13:14:21.060508 3003 log.go:181] (0xc0008c6f20) (0xc00080f180) Stream removed, broadcasting: 1\nI0914 13:14:21.060520 3003 log.go:181] (0xc0008c6f20) Go away received\nI0914 13:14:21.060830 3003 log.go:181] (0xc0008c6f20) (0xc00080f180) Stream removed, broadcasting: 1\nI0914 13:14:21.060845 3003 log.go:181] (0xc0008c6f20) (0xc0007ae640) Stream removed, broadcasting: 3\nI0914 13:14:21.060852 3003 log.go:181] (0xc0008c6f20) (0xc0007af360) Stream removed, broadcasting: 5\n" Sep 14 13:14:21.065: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3414.svc.cluster.local\tcanonical name = externalsvc.services-3414.svc.cluster.local.\nName:\texternalsvc.services-3414.svc.cluster.local\nAddress: 10.110.16.202\n\n" STEP: deleting ReplicationController externalsvc in 
namespace services-3414, will wait for the garbage collector to delete the pods Sep 14 13:14:21.136: INFO: Deleting ReplicationController externalsvc took: 17.852898ms Sep 14 13:14:21.536: INFO: Terminating ReplicationController externalsvc pods took: 400.217941ms Sep 14 13:14:35.974: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:14:35.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3414" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.996 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":253,"skipped":4083,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 
13:14:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-31f80c52-eb71-42a4-9209-28be0dfa3f8d STEP: Creating a pod to test consume configMaps Sep 14 13:14:36.105: INFO: Waiting up to 5m0s for pod "pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063" in namespace "configmap-4557" to be "Succeeded or Failed" Sep 14 13:14:36.108: INFO: Pod "pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331537ms Sep 14 13:14:38.111: INFO: Pod "pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005865347s Sep 14 13:14:40.116: INFO: Pod "pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0109923s STEP: Saw pod success Sep 14 13:14:40.116: INFO: Pod "pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063" satisfied condition "Succeeded or Failed" Sep 14 13:14:40.120: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063 container configmap-volume-test: STEP: delete the pod Sep 14 13:14:40.146: INFO: Waiting for pod pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063 to disappear Sep 14 13:14:40.221: INFO: Pod pod-configmaps-fbb9168b-86cd-4f37-9efc-6e2566bee063 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:14:40.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4557" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:14:40.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 14 13:14:40.303: INFO: Waiting up to 5m0s for pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734" in namespace "emptydir-3634" to be "Succeeded or Failed" Sep 14 13:14:40.343: INFO: Pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734": Phase="Pending", Reason="", readiness=false. Elapsed: 39.731478ms Sep 14 13:14:42.558: INFO: Pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255403997s Sep 14 13:14:44.562: INFO: Pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258717883s Sep 14 13:14:46.567: INFO: Pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.263566545s STEP: Saw pod success Sep 14 13:14:46.567: INFO: Pod "pod-babdd940-cb1a-4727-b8c9-183718d1b734" satisfied condition "Succeeded or Failed" Sep 14 13:14:46.570: INFO: Trying to get logs from node latest-worker2 pod pod-babdd940-cb1a-4727-b8c9-183718d1b734 container test-container: STEP: delete the pod Sep 14 13:14:46.616: INFO: Waiting for pod pod-babdd940-cb1a-4727-b8c9-183718d1b734 to disappear Sep 14 13:14:46.654: INFO: Pod pod-babdd940-cb1a-4727-b8c9-183718d1b734 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:14:46.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3634" for this suite. 
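The emptyDir test above verifies that a file created with mode 0644 on the default medium carries exactly that mode when read back by a non-root user. The mode check itself can be reproduced locally on any Linux box (this sketch uses a temp file rather than an emptyDir volume, and GNU `stat`, which the test image also relies on):

```shell
# Reproduce the permission assertion outside the cluster: create a file,
# set mode 0644, and read the octal mode back, as the test container's
# mounttest binary does for the file inside the emptyDir volume.
f=$(mktemp)
chmod 0644 "$f"
mode=$(stat -c '%a' "$f")   # GNU stat; prints the octal permission bits
echo "mode: $mode"
rm -f "$f"
```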
• [SLOW TEST:6.479 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4154,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:14:46.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-757 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-757 to expose endpoints map[] Sep 14 13:14:46.891: INFO: Failed to get Endpoints object: endpoints 
"multi-endpoint-test" not found Sep 14 13:14:47.924: INFO: successfully validated that service multi-endpoint-test in namespace services-757 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-757 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-757 to expose endpoints map[pod1:[100]] Sep 14 13:14:50.975: INFO: successfully validated that service multi-endpoint-test in namespace services-757 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-757 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-757 to expose endpoints map[pod1:[100] pod2:[101]] Sep 14 13:14:54.051: INFO: successfully validated that service multi-endpoint-test in namespace services-757 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-757 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-757 to expose endpoints map[pod2:[101]] Sep 14 13:14:54.097: INFO: successfully validated that service multi-endpoint-test in namespace services-757 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-757 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-757 to expose endpoints map[] Sep 14 13:14:55.130: INFO: successfully validated that service multi-endpoint-test in namespace services-757 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:14:55.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-757" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:8.470 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":256,"skipped":4167,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:14:55.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-546 STEP: creating replication controller 
nodeport-test in namespace services-546 I0914 13:14:55.331857 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-546, replica count: 2 I0914 13:14:58.382214 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:15:01.382386 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:15:01.382: INFO: Creating new exec pod Sep 14 13:15:06.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-546 execpodlghtv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 14 13:15:06.697: INFO: stderr: "I0914 13:15:06.604020 3021 log.go:181] (0xc000857290) (0xc0008b8a00) Create stream\nI0914 13:15:06.604073 3021 log.go:181] (0xc000857290) (0xc0008b8a00) Stream added, broadcasting: 1\nI0914 13:15:06.607449 3021 log.go:181] (0xc000857290) Reply frame received for 1\nI0914 13:15:06.607483 3021 log.go:181] (0xc000857290) (0xc000c480a0) Create stream\nI0914 13:15:06.607494 3021 log.go:181] (0xc000857290) (0xc000c480a0) Stream added, broadcasting: 3\nI0914 13:15:06.608219 3021 log.go:181] (0xc000857290) Reply frame received for 3\nI0914 13:15:06.608247 3021 log.go:181] (0xc000857290) (0xc000796280) Create stream\nI0914 13:15:06.608256 3021 log.go:181] (0xc000857290) (0xc000796280) Stream added, broadcasting: 5\nI0914 13:15:06.608880 3021 log.go:181] (0xc000857290) Reply frame received for 5\nI0914 13:15:06.690794 3021 log.go:181] (0xc000857290) Data frame received for 5\nI0914 13:15:06.690860 3021 log.go:181] (0xc000796280) (5) Data frame handling\nI0914 13:15:06.690886 3021 log.go:181] (0xc000796280) (5) Data frame sent\nI0914 13:15:06.690906 3021 log.go:181] (0xc000857290) Data frame received for 5\nI0914 13:15:06.690923 3021 log.go:181] (0xc000796280) (5) 
Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0914 13:15:06.690967 3021 log.go:181] (0xc000857290) Data frame received for 3\nI0914 13:15:06.690985 3021 log.go:181] (0xc000c480a0) (3) Data frame handling\nI0914 13:15:06.692615 3021 log.go:181] (0xc000857290) Data frame received for 1\nI0914 13:15:06.692639 3021 log.go:181] (0xc0008b8a00) (1) Data frame handling\nI0914 13:15:06.692658 3021 log.go:181] (0xc0008b8a00) (1) Data frame sent\nI0914 13:15:06.692796 3021 log.go:181] (0xc000857290) (0xc0008b8a00) Stream removed, broadcasting: 1\nI0914 13:15:06.692814 3021 log.go:181] (0xc000857290) Go away received\nI0914 13:15:06.693101 3021 log.go:181] (0xc000857290) (0xc0008b8a00) Stream removed, broadcasting: 1\nI0914 13:15:06.693116 3021 log.go:181] (0xc000857290) (0xc000c480a0) Stream removed, broadcasting: 3\nI0914 13:15:06.693123 3021 log.go:181] (0xc000857290) (0xc000796280) Stream removed, broadcasting: 5\n" Sep 14 13:15:06.697: INFO: stdout: "" Sep 14 13:15:06.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-546 execpodlghtv -- /bin/sh -x -c nc -zv -t -w 2 10.105.162.191 80' Sep 14 13:15:06.905: INFO: stderr: "I0914 13:15:06.833952 3039 log.go:181] (0xc000a3cdc0) (0xc000300a00) Create stream\nI0914 13:15:06.834007 3039 log.go:181] (0xc000a3cdc0) (0xc000300a00) Stream added, broadcasting: 1\nI0914 13:15:06.841141 3039 log.go:181] (0xc000a3cdc0) Reply frame received for 1\nI0914 13:15:06.841184 3039 log.go:181] (0xc000a3cdc0) (0xc00092a500) Create stream\nI0914 13:15:06.841203 3039 log.go:181] (0xc000a3cdc0) (0xc00092a500) Stream added, broadcasting: 3\nI0914 13:15:06.842591 3039 log.go:181] (0xc000a3cdc0) Reply frame received for 3\nI0914 13:15:06.842643 3039 log.go:181] (0xc000a3cdc0) (0xc000300aa0) Create stream\nI0914 13:15:06.842669 3039 log.go:181] (0xc000a3cdc0) (0xc000300aa0) Stream added, broadcasting: 
5\nI0914 13:15:06.843836 3039 log.go:181] (0xc000a3cdc0) Reply frame received for 5\nI0914 13:15:06.899279 3039 log.go:181] (0xc000a3cdc0) Data frame received for 3\nI0914 13:15:06.899316 3039 log.go:181] (0xc00092a500) (3) Data frame handling\nI0914 13:15:06.899338 3039 log.go:181] (0xc000a3cdc0) Data frame received for 5\nI0914 13:15:06.899348 3039 log.go:181] (0xc000300aa0) (5) Data frame handling\nI0914 13:15:06.899360 3039 log.go:181] (0xc000300aa0) (5) Data frame sent\nI0914 13:15:06.899370 3039 log.go:181] (0xc000a3cdc0) Data frame received for 5\nI0914 13:15:06.899379 3039 log.go:181] (0xc000300aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.162.191 80\nConnection to 10.105.162.191 80 port [tcp/http] succeeded!\nI0914 13:15:06.901126 3039 log.go:181] (0xc000a3cdc0) Data frame received for 1\nI0914 13:15:06.901167 3039 log.go:181] (0xc000300a00) (1) Data frame handling\nI0914 13:15:06.901265 3039 log.go:181] (0xc000300a00) (1) Data frame sent\nI0914 13:15:06.901292 3039 log.go:181] (0xc000a3cdc0) (0xc000300a00) Stream removed, broadcasting: 1\nI0914 13:15:06.901325 3039 log.go:181] (0xc000a3cdc0) Go away received\nI0914 13:15:06.901732 3039 log.go:181] (0xc000a3cdc0) (0xc000300a00) Stream removed, broadcasting: 1\nI0914 13:15:06.901757 3039 log.go:181] (0xc000a3cdc0) (0xc00092a500) Stream removed, broadcasting: 3\nI0914 13:15:06.901769 3039 log.go:181] (0xc000a3cdc0) (0xc000300aa0) Stream removed, broadcasting: 5\n" Sep 14 13:15:06.905: INFO: stdout: "" Sep 14 13:15:06.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-546 execpodlghtv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30331' Sep 14 13:15:07.109: INFO: stderr: "I0914 13:15:07.034429 3057 log.go:181] (0xc000c95550) (0xc000c8c820) Create stream\nI0914 13:15:07.034482 3057 log.go:181] (0xc000c95550) (0xc000c8c820) Stream added, broadcasting: 1\nI0914 13:15:07.039265 3057 log.go:181] (0xc000c95550) Reply frame 
received for 1\nI0914 13:15:07.039294 3057 log.go:181] (0xc000c95550) (0xc000c1c0a0) Create stream\nI0914 13:15:07.039303 3057 log.go:181] (0xc000c95550) (0xc000c1c0a0) Stream added, broadcasting: 3\nI0914 13:15:07.040281 3057 log.go:181] (0xc000c95550) Reply frame received for 3\nI0914 13:15:07.040318 3057 log.go:181] (0xc000c95550) (0xc000c8c000) Create stream\nI0914 13:15:07.040337 3057 log.go:181] (0xc000c95550) (0xc000c8c000) Stream added, broadcasting: 5\nI0914 13:15:07.041139 3057 log.go:181] (0xc000c95550) Reply frame received for 5\nI0914 13:15:07.102806 3057 log.go:181] (0xc000c95550) Data frame received for 5\nI0914 13:15:07.102838 3057 log.go:181] (0xc000c8c000) (5) Data frame handling\nI0914 13:15:07.102856 3057 log.go:181] (0xc000c8c000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30331\nI0914 13:15:07.103502 3057 log.go:181] (0xc000c95550) Data frame received for 5\nI0914 13:15:07.103539 3057 log.go:181] (0xc000c8c000) (5) Data frame handling\nI0914 13:15:07.103571 3057 log.go:181] (0xc000c8c000) (5) Data frame sent\nConnection to 172.18.0.15 30331 port [tcp/30331] succeeded!\nI0914 13:15:07.103751 3057 log.go:181] (0xc000c95550) Data frame received for 3\nI0914 13:15:07.103763 3057 log.go:181] (0xc000c1c0a0) (3) Data frame handling\nI0914 13:15:07.103966 3057 log.go:181] (0xc000c95550) Data frame received for 5\nI0914 13:15:07.103995 3057 log.go:181] (0xc000c8c000) (5) Data frame handling\nI0914 13:15:07.105753 3057 log.go:181] (0xc000c95550) Data frame received for 1\nI0914 13:15:07.105768 3057 log.go:181] (0xc000c8c820) (1) Data frame handling\nI0914 13:15:07.105775 3057 log.go:181] (0xc000c8c820) (1) Data frame sent\nI0914 13:15:07.105783 3057 log.go:181] (0xc000c95550) (0xc000c8c820) Stream removed, broadcasting: 1\nI0914 13:15:07.105882 3057 log.go:181] (0xc000c95550) Go away received\nI0914 13:15:07.106148 3057 log.go:181] (0xc000c95550) (0xc000c8c820) Stream removed, broadcasting: 1\nI0914 13:15:07.106161 3057 log.go:181] (0xc000c95550) 
(0xc000c1c0a0) Stream removed, broadcasting: 3\nI0914 13:15:07.106167 3057 log.go:181] (0xc000c95550) (0xc000c8c000) Stream removed, broadcasting: 5\n" Sep 14 13:15:07.109: INFO: stdout: "" Sep 14 13:15:07.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-546 execpodlghtv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30331' Sep 14 13:15:07.348: INFO: stderr: "I0914 13:15:07.260854 3076 log.go:181] (0xc000acafd0) (0xc0003ac320) Create stream\nI0914 13:15:07.260915 3076 log.go:181] (0xc000acafd0) (0xc0003ac320) Stream added, broadcasting: 1\nI0914 13:15:07.265795 3076 log.go:181] (0xc000acafd0) Reply frame received for 1\nI0914 13:15:07.265831 3076 log.go:181] (0xc000acafd0) (0xc00099e320) Create stream\nI0914 13:15:07.265839 3076 log.go:181] (0xc000acafd0) (0xc00099e320) Stream added, broadcasting: 3\nI0914 13:15:07.267042 3076 log.go:181] (0xc000acafd0) Reply frame received for 3\nI0914 13:15:07.267097 3076 log.go:181] (0xc000acafd0) (0xc0004340a0) Create stream\nI0914 13:15:07.267122 3076 log.go:181] (0xc000acafd0) (0xc0004340a0) Stream added, broadcasting: 5\nI0914 13:15:07.268215 3076 log.go:181] (0xc000acafd0) Reply frame received for 5\nI0914 13:15:07.341408 3076 log.go:181] (0xc000acafd0) Data frame received for 5\nI0914 13:15:07.341445 3076 log.go:181] (0xc0004340a0) (5) Data frame handling\nI0914 13:15:07.341492 3076 log.go:181] (0xc0004340a0) (5) Data frame sent\nI0914 13:15:07.341515 3076 log.go:181] (0xc000acafd0) Data frame received for 5\nI0914 13:15:07.341532 3076 log.go:181] (0xc0004340a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30331\nConnection to 172.18.0.16 30331 port [tcp/30331] succeeded!\nI0914 13:15:07.341860 3076 log.go:181] (0xc000acafd0) Data frame received for 3\nI0914 13:15:07.341889 3076 log.go:181] (0xc00099e320) (3) Data frame handling\nI0914 13:15:07.343434 3076 log.go:181] (0xc000acafd0) Data frame received for 1\nI0914 13:15:07.343457 
3076 log.go:181] (0xc0003ac320) (1) Data frame handling\nI0914 13:15:07.343470 3076 log.go:181] (0xc0003ac320) (1) Data frame sent\nI0914 13:15:07.343491 3076 log.go:181] (0xc000acafd0) (0xc0003ac320) Stream removed, broadcasting: 1\nI0914 13:15:07.343526 3076 log.go:181] (0xc000acafd0) Go away received\nI0914 13:15:07.344001 3076 log.go:181] (0xc000acafd0) (0xc0003ac320) Stream removed, broadcasting: 1\nI0914 13:15:07.344026 3076 log.go:181] (0xc000acafd0) (0xc00099e320) Stream removed, broadcasting: 3\nI0914 13:15:07.344039 3076 log.go:181] (0xc000acafd0) (0xc0004340a0) Stream removed, broadcasting: 5\n" Sep 14 13:15:07.348: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:15:07.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-546" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.177 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":257,"skipped":4179,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:15:07.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Sep 14 13:15:11.444: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9851 PodName:var-expansion-d6a43987-1dce-4a7c-8f3e-85996c6018c8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:15:11.444: INFO: >>> kubeConfig: /root/.kube/config I0914 13:15:11.478924 7 log.go:181] (0xc000954840) (0xc0014db400) Create stream I0914 13:15:11.478994 7 log.go:181] (0xc000954840) (0xc0014db400) Stream added, broadcasting: 1 I0914 13:15:11.481367 7 log.go:181] (0xc000954840) Reply frame received for 1 I0914 13:15:11.481396 7 log.go:181] (0xc000954840) (0xc00108c6e0) Create stream I0914 13:15:11.481402 7 log.go:181] (0xc000954840) (0xc00108c6e0) Stream added, broadcasting: 3 I0914 13:15:11.482293 7 log.go:181] (0xc000954840) Reply frame received for 3 I0914 13:15:11.482318 7 log.go:181] (0xc000954840) (0xc0022c0320) Create stream I0914 13:15:11.482328 7 log.go:181] (0xc000954840) (0xc0022c0320) Stream added, broadcasting: 5 I0914 13:15:11.482854 7 log.go:181] (0xc000954840) Reply frame 
received for 5 I0914 13:15:11.558646 7 log.go:181] (0xc000954840) Data frame received for 3 I0914 13:15:11.558671 7 log.go:181] (0xc00108c6e0) (3) Data frame handling I0914 13:15:11.558714 7 log.go:181] (0xc000954840) Data frame received for 5 I0914 13:15:11.558748 7 log.go:181] (0xc0022c0320) (5) Data frame handling I0914 13:15:11.560524 7 log.go:181] (0xc000954840) Data frame received for 1 I0914 13:15:11.560549 7 log.go:181] (0xc0014db400) (1) Data frame handling I0914 13:15:11.560578 7 log.go:181] (0xc0014db400) (1) Data frame sent I0914 13:15:11.560618 7 log.go:181] (0xc000954840) (0xc0014db400) Stream removed, broadcasting: 1 I0914 13:15:11.560753 7 log.go:181] (0xc000954840) Go away received I0914 13:15:11.560872 7 log.go:181] (0xc000954840) (0xc0014db400) Stream removed, broadcasting: 1 I0914 13:15:11.560980 7 log.go:181] (0xc000954840) (0xc00108c6e0) Stream removed, broadcasting: 3 I0914 13:15:11.561011 7 log.go:181] (0xc000954840) (0xc0022c0320) Stream removed, broadcasting: 5 STEP: test for file in mounted path Sep 14 13:15:11.564: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9851 PodName:var-expansion-d6a43987-1dce-4a7c-8f3e-85996c6018c8 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 14 13:15:11.564: INFO: >>> kubeConfig: /root/.kube/config I0914 13:15:11.595460 7 log.go:181] (0xc00346e4d0) (0xc00395e140) Create stream I0914 13:15:11.595479 7 log.go:181] (0xc00346e4d0) (0xc00395e140) Stream added, broadcasting: 1 I0914 13:15:11.597391 7 log.go:181] (0xc00346e4d0) Reply frame received for 1 I0914 13:15:11.597436 7 log.go:181] (0xc00346e4d0) (0xc0014190e0) Create stream I0914 13:15:11.597452 7 log.go:181] (0xc00346e4d0) (0xc0014190e0) Stream added, broadcasting: 3 I0914 13:15:11.598465 7 log.go:181] (0xc00346e4d0) Reply frame received for 3 I0914 13:15:11.598514 7 log.go:181] (0xc00346e4d0) (0xc00395e1e0) Create stream I0914 13:15:11.598532 7 
log.go:181] (0xc00346e4d0) (0xc00395e1e0) Stream added, broadcasting: 5 I0914 13:15:11.599489 7 log.go:181] (0xc00346e4d0) Reply frame received for 5 I0914 13:15:11.670163 7 log.go:181] (0xc00346e4d0) Data frame received for 5 I0914 13:15:11.670202 7 log.go:181] (0xc00395e1e0) (5) Data frame handling I0914 13:15:11.670225 7 log.go:181] (0xc00346e4d0) Data frame received for 3 I0914 13:15:11.670239 7 log.go:181] (0xc0014190e0) (3) Data frame handling I0914 13:15:11.671403 7 log.go:181] (0xc00346e4d0) Data frame received for 1 I0914 13:15:11.671421 7 log.go:181] (0xc00395e140) (1) Data frame handling I0914 13:15:11.671432 7 log.go:181] (0xc00395e140) (1) Data frame sent I0914 13:15:11.671446 7 log.go:181] (0xc00346e4d0) (0xc00395e140) Stream removed, broadcasting: 1 I0914 13:15:11.671466 7 log.go:181] (0xc00346e4d0) Go away received I0914 13:15:11.671517 7 log.go:181] (0xc00346e4d0) (0xc00395e140) Stream removed, broadcasting: 1 I0914 13:15:11.671526 7 log.go:181] (0xc00346e4d0) (0xc0014190e0) Stream removed, broadcasting: 3 I0914 13:15:11.671536 7 log.go:181] (0xc00346e4d0) (0xc00395e1e0) Stream removed, broadcasting: 5 STEP: updating the annotation value Sep 14 13:15:12.181: INFO: Successfully updated pod "var-expansion-d6a43987-1dce-4a7c-8f3e-85996c6018c8" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Sep 14 13:15:12.226: INFO: Deleting pod "var-expansion-d6a43987-1dce-4a7c-8f3e-85996c6018c8" in namespace "var-expansion-9851" Sep 14 13:15:12.231: INFO: Wait up to 5m0s for pod "var-expansion-d6a43987-1dce-4a7c-8f3e-85996c6018c8" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:15:46.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9851" for this suite. 
• [SLOW TEST:38.952 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":258,"skipped":4196,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:15:46.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e17af09a-53f6-4aae-8f06-b970fd6fb297 STEP: Creating a pod to test consume configMaps Sep 14 13:15:46.393: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e" in namespace "configmap-1608" to be "Succeeded or Failed" Sep 14 13:15:46.410: INFO: Pod 
"pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.711373ms Sep 14 13:15:48.415: INFO: Pod "pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022472477s Sep 14 13:15:51.399: INFO: Pod "pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.006058652s STEP: Saw pod success Sep 14 13:15:51.399: INFO: Pod "pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e" satisfied condition "Succeeded or Failed" Sep 14 13:15:51.403: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e container configmap-volume-test: STEP: delete the pod Sep 14 13:15:51.447: INFO: Waiting for pod pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e to disappear Sep 14 13:15:51.453: INFO: Pod pod-configmaps-5d90f0b7-2dc9-49fc-b7c1-8ad175e65c3e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:15:51.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1608" for this suite. 
• [SLOW TEST:5.150 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:15:51.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:15:51.809: INFO: Checking APIGroup: apiregistration.k8s.io Sep 14 13:15:51.810: INFO: 
PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Sep 14 13:15:51.810: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.810: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Sep 14 13:15:51.810: INFO: Checking APIGroup: extensions Sep 14 13:15:51.810: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Sep 14 13:15:51.810: INFO: Versions found [{extensions/v1beta1 v1beta1}] Sep 14 13:15:51.810: INFO: extensions/v1beta1 matches extensions/v1beta1 Sep 14 13:15:51.810: INFO: Checking APIGroup: apps Sep 14 13:15:51.812: INFO: PreferredVersion.GroupVersion: apps/v1 Sep 14 13:15:51.812: INFO: Versions found [{apps/v1 v1}] Sep 14 13:15:51.812: INFO: apps/v1 matches apps/v1 Sep 14 13:15:51.812: INFO: Checking APIGroup: events.k8s.io Sep 14 13:15:51.813: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Sep 14 13:15:51.813: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.813: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Sep 14 13:15:51.813: INFO: Checking APIGroup: authentication.k8s.io Sep 14 13:15:51.814: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Sep 14 13:15:51.814: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.814: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Sep 14 13:15:51.814: INFO: Checking APIGroup: authorization.k8s.io Sep 14 13:15:51.815: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Sep 14 13:15:51.815: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.815: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Sep 14 13:15:51.815: INFO: Checking APIGroup: autoscaling Sep 14 13:15:51.816: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Sep 14 13:15:51.816: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} 
{autoscaling/v2beta2 v2beta2}] Sep 14 13:15:51.816: INFO: autoscaling/v1 matches autoscaling/v1 Sep 14 13:15:51.816: INFO: Checking APIGroup: batch Sep 14 13:15:51.817: INFO: PreferredVersion.GroupVersion: batch/v1 Sep 14 13:15:51.817: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Sep 14 13:15:51.817: INFO: batch/v1 matches batch/v1 Sep 14 13:15:51.817: INFO: Checking APIGroup: certificates.k8s.io Sep 14 13:15:51.818: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Sep 14 13:15:51.818: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.818: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Sep 14 13:15:51.818: INFO: Checking APIGroup: networking.k8s.io Sep 14 13:15:51.819: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Sep 14 13:15:51.819: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.819: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Sep 14 13:15:51.819: INFO: Checking APIGroup: policy Sep 14 13:15:51.820: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Sep 14 13:15:51.820: INFO: Versions found [{policy/v1beta1 v1beta1}] Sep 14 13:15:51.820: INFO: policy/v1beta1 matches policy/v1beta1 Sep 14 13:15:51.820: INFO: Checking APIGroup: rbac.authorization.k8s.io Sep 14 13:15:51.821: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Sep 14 13:15:51.821: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.821: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Sep 14 13:15:51.821: INFO: Checking APIGroup: storage.k8s.io Sep 14 13:15:51.822: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Sep 14 13:15:51.822: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.823: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Sep 14 13:15:51.823: INFO: Checking 
APIGroup: admissionregistration.k8s.io Sep 14 13:15:51.824: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Sep 14 13:15:51.824: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.824: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Sep 14 13:15:51.824: INFO: Checking APIGroup: apiextensions.k8s.io Sep 14 13:15:51.825: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Sep 14 13:15:51.825: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.825: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Sep 14 13:15:51.825: INFO: Checking APIGroup: scheduling.k8s.io Sep 14 13:15:51.826: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Sep 14 13:15:51.826: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.826: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Sep 14 13:15:51.826: INFO: Checking APIGroup: coordination.k8s.io Sep 14 13:15:51.827: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Sep 14 13:15:51.827: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.827: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Sep 14 13:15:51.827: INFO: Checking APIGroup: node.k8s.io Sep 14 13:15:51.828: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Sep 14 13:15:51.828: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.828: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Sep 14 13:15:51.828: INFO: Checking APIGroup: discovery.k8s.io Sep 14 13:15:51.829: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Sep 14 13:15:51.829: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Sep 14 13:15:51.829: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] 
Discovery /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:15:51.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7292" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":260,"skipped":4238,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:15:51.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9e28e46f-7e34-4eeb-8935-c4cbba674726 STEP: Creating a pod to test consume configMaps Sep 14 13:15:51.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514" in namespace "projected-7570" to be "Succeeded or Failed" Sep 14 13:15:51.951: INFO: Pod "pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.570869ms Sep 14 13:15:53.985: INFO: Pod "pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065575544s Sep 14 13:15:55.990: INFO: Pod "pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070637744s STEP: Saw pod success Sep 14 13:15:55.990: INFO: Pod "pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514" satisfied condition "Succeeded or Failed" Sep 14 13:15:55.993: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514 container projected-configmap-volume-test: STEP: delete the pod Sep 14 13:15:56.030: INFO: Waiting for pod pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514 to disappear Sep 14 13:15:56.043: INFO: Pod pod-projected-configmaps-c5155969-8c37-47f5-b872-61ac3fa87514 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:15:56.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7570" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4239,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:15:56.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:16:00.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9778" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4246,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:16:00.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1506 Sep 14 13:16:04.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 14 13:16:04.580: INFO: stderr: "I0914 13:16:04.470668 3094 log.go:181] (0xc0009f0fd0) (0xc0005020a0) Create stream\nI0914 13:16:04.470711 3094 log.go:181] (0xc0009f0fd0) (0xc0005020a0) Stream added, broadcasting: 1\nI0914 13:16:04.474490 3094 log.go:181] (0xc0009f0fd0) Reply frame received for 1\nI0914 
13:16:04.474544 3094 log.go:181] (0xc0009f0fd0) (0xc0003988c0) Create stream\nI0914 13:16:04.474560 3094 log.go:181] (0xc0009f0fd0) (0xc0003988c0) Stream added, broadcasting: 3\nI0914 13:16:04.475616 3094 log.go:181] (0xc0009f0fd0) Reply frame received for 3\nI0914 13:16:04.475651 3094 log.go:181] (0xc0009f0fd0) (0xc000c1c000) Create stream\nI0914 13:16:04.475663 3094 log.go:181] (0xc0009f0fd0) (0xc000c1c000) Stream added, broadcasting: 5\nI0914 13:16:04.476743 3094 log.go:181] (0xc0009f0fd0) Reply frame received for 5\nI0914 13:16:04.568615 3094 log.go:181] (0xc0009f0fd0) Data frame received for 5\nI0914 13:16:04.568644 3094 log.go:181] (0xc000c1c000) (5) Data frame handling\nI0914 13:16:04.568666 3094 log.go:181] (0xc000c1c000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0914 13:16:04.572492 3094 log.go:181] (0xc0009f0fd0) Data frame received for 3\nI0914 13:16:04.572507 3094 log.go:181] (0xc0003988c0) (3) Data frame handling\nI0914 13:16:04.572521 3094 log.go:181] (0xc0003988c0) (3) Data frame sent\nI0914 13:16:04.573059 3094 log.go:181] (0xc0009f0fd0) Data frame received for 5\nI0914 13:16:04.573078 3094 log.go:181] (0xc000c1c000) (5) Data frame handling\nI0914 13:16:04.573313 3094 log.go:181] (0xc0009f0fd0) Data frame received for 3\nI0914 13:16:04.573333 3094 log.go:181] (0xc0003988c0) (3) Data frame handling\nI0914 13:16:04.575109 3094 log.go:181] (0xc0009f0fd0) Data frame received for 1\nI0914 13:16:04.575128 3094 log.go:181] (0xc0005020a0) (1) Data frame handling\nI0914 13:16:04.575143 3094 log.go:181] (0xc0005020a0) (1) Data frame sent\nI0914 13:16:04.575168 3094 log.go:181] (0xc0009f0fd0) (0xc0005020a0) Stream removed, broadcasting: 1\nI0914 13:16:04.575272 3094 log.go:181] (0xc0009f0fd0) Go away received\nI0914 13:16:04.575499 3094 log.go:181] (0xc0009f0fd0) (0xc0005020a0) Stream removed, broadcasting: 1\nI0914 13:16:04.575519 3094 log.go:181] (0xc0009f0fd0) (0xc0003988c0) Stream removed, broadcasting: 
3\nI0914 13:16:04.575531 3094 log.go:181] (0xc0009f0fd0) (0xc000c1c000) Stream removed, broadcasting: 5\n" Sep 14 13:16:04.580: INFO: stdout: "iptables" Sep 14 13:16:04.580: INFO: proxyMode: iptables Sep 14 13:16:04.586: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:04.590: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:06.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:06.595: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:08.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:08.596: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:10.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:10.595: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:12.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:12.596: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:14.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:14.594: INFO: Pod kube-proxy-mode-detector still exists Sep 14 13:16:16.591: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 14 13:16:16.594: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1506 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1506 I0914 13:16:16.632183 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1506, replica count: 3 I0914 13:16:19.682660 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:16:22.682929 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:16:22.690: INFO: Creating new exec pod Sep 
14 13:16:27.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 execpod-affinityzpnkd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 14 13:16:27.958: INFO: stderr: "I0914 13:16:27.863953 3113 log.go:181] (0xc00003a0b0) (0xc000840140) Create stream\nI0914 13:16:27.864022 3113 log.go:181] (0xc00003a0b0) (0xc000840140) Stream added, broadcasting: 1\nI0914 13:16:27.865936 3113 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0914 13:16:27.865969 3113 log.go:181] (0xc00003a0b0) (0xc000e98000) Create stream\nI0914 13:16:27.865978 3113 log.go:181] (0xc00003a0b0) (0xc000e98000) Stream added, broadcasting: 3\nI0914 13:16:27.866834 3113 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0914 13:16:27.866879 3113 log.go:181] (0xc00003a0b0) (0xc000e980a0) Create stream\nI0914 13:16:27.866893 3113 log.go:181] (0xc00003a0b0) (0xc000e980a0) Stream added, broadcasting: 5\nI0914 13:16:27.867706 3113 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0914 13:16:27.951409 3113 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 13:16:27.951440 3113 log.go:181] (0xc000e980a0) (5) Data frame handling\nI0914 13:16:27.951464 3113 log.go:181] (0xc000e980a0) (5) Data frame sent\nI0914 13:16:27.951474 3113 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0914 13:16:27.951483 3113 log.go:181] (0xc000e980a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0914 13:16:27.951542 3113 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0914 13:16:27.951554 3113 log.go:181] (0xc000e98000) (3) Data frame handling\nI0914 13:16:27.953538 3113 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0914 13:16:27.953556 3113 log.go:181] (0xc000840140) (1) Data frame handling\nI0914 13:16:27.953566 3113 log.go:181] (0xc000840140) (1) Data frame sent\nI0914 
13:16:27.953580 3113 log.go:181] (0xc00003a0b0) (0xc000840140) Stream removed, broadcasting: 1\nI0914 13:16:27.953692 3113 log.go:181] (0xc00003a0b0) Go away received\nI0914 13:16:27.953974 3113 log.go:181] (0xc00003a0b0) (0xc000840140) Stream removed, broadcasting: 1\nI0914 13:16:27.953990 3113 log.go:181] (0xc00003a0b0) (0xc000e98000) Stream removed, broadcasting: 3\nI0914 13:16:27.953999 3113 log.go:181] (0xc00003a0b0) (0xc000e980a0) Stream removed, broadcasting: 5\n" Sep 14 13:16:27.958: INFO: stdout: "" Sep 14 13:16:27.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 execpod-affinityzpnkd -- /bin/sh -x -c nc -zv -t -w 2 10.100.59.64 80' Sep 14 13:16:28.181: INFO: stderr: "I0914 13:16:28.098520 3132 log.go:181] (0xc000e1e2c0) (0xc000c82a00) Create stream\nI0914 13:16:28.098590 3132 log.go:181] (0xc000e1e2c0) (0xc000c82a00) Stream added, broadcasting: 1\nI0914 13:16:28.105019 3132 log.go:181] (0xc000e1e2c0) Reply frame received for 1\nI0914 13:16:28.105186 3132 log.go:181] (0xc000e1e2c0) (0xc000c82000) Create stream\nI0914 13:16:28.105274 3132 log.go:181] (0xc000e1e2c0) (0xc000c82000) Stream added, broadcasting: 3\nI0914 13:16:28.107444 3132 log.go:181] (0xc000e1e2c0) Reply frame received for 3\nI0914 13:16:28.107485 3132 log.go:181] (0xc000e1e2c0) (0xc0007961e0) Create stream\nI0914 13:16:28.107498 3132 log.go:181] (0xc000e1e2c0) (0xc0007961e0) Stream added, broadcasting: 5\nI0914 13:16:28.108946 3132 log.go:181] (0xc000e1e2c0) Reply frame received for 5\nI0914 13:16:28.174965 3132 log.go:181] (0xc000e1e2c0) Data frame received for 3\nI0914 13:16:28.175029 3132 log.go:181] (0xc000c82000) (3) Data frame handling\nI0914 13:16:28.175069 3132 log.go:181] (0xc000e1e2c0) Data frame received for 5\nI0914 13:16:28.175095 3132 log.go:181] (0xc0007961e0) (5) Data frame handling\nI0914 13:16:28.175130 3132 log.go:181] (0xc0007961e0) (5) Data frame sent\nI0914 13:16:28.175152 3132 
log.go:181] (0xc000e1e2c0) Data frame received for 5\n+ nc -zv -t -w 2 10.100.59.64 80\nConnection to 10.100.59.64 80 port [tcp/http] succeeded!\nI0914 13:16:28.175176 3132 log.go:181] (0xc0007961e0) (5) Data frame handling\nI0914 13:16:28.176888 3132 log.go:181] (0xc000e1e2c0) Data frame received for 1\nI0914 13:16:28.176905 3132 log.go:181] (0xc000c82a00) (1) Data frame handling\nI0914 13:16:28.176923 3132 log.go:181] (0xc000c82a00) (1) Data frame sent\nI0914 13:16:28.176938 3132 log.go:181] (0xc000e1e2c0) (0xc000c82a00) Stream removed, broadcasting: 1\nI0914 13:16:28.176951 3132 log.go:181] (0xc000e1e2c0) Go away received\nI0914 13:16:28.177532 3132 log.go:181] (0xc000e1e2c0) (0xc000c82a00) Stream removed, broadcasting: 1\nI0914 13:16:28.177563 3132 log.go:181] (0xc000e1e2c0) (0xc000c82000) Stream removed, broadcasting: 3\nI0914 13:16:28.177576 3132 log.go:181] (0xc000e1e2c0) (0xc0007961e0) Stream removed, broadcasting: 5\n" Sep 14 13:16:28.181: INFO: stdout: "" Sep 14 13:16:28.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 execpod-affinityzpnkd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.59.64:80/ ; done' Sep 14 13:16:28.489: INFO: stderr: "I0914 13:16:28.313055 3150 log.go:181] (0xc000a4cf20) (0xc0006375e0) Create stream\nI0914 13:16:28.313118 3150 log.go:181] (0xc000a4cf20) (0xc0006375e0) Stream added, broadcasting: 1\nI0914 13:16:28.317708 3150 log.go:181] (0xc000a4cf20) Reply frame received for 1\nI0914 13:16:28.317744 3150 log.go:181] (0xc000a4cf20) (0xc0009a1040) Create stream\nI0914 13:16:28.317754 3150 log.go:181] (0xc000a4cf20) (0xc0009a1040) Stream added, broadcasting: 3\nI0914 13:16:28.318583 3150 log.go:181] (0xc000a4cf20) Reply frame received for 3\nI0914 13:16:28.318631 3150 log.go:181] (0xc000a4cf20) (0xc000636000) Create stream\nI0914 13:16:28.318645 3150 log.go:181] (0xc000a4cf20) (0xc000636000) Stream 
added, broadcasting: 5\nI0914 13:16:28.319392 3150 log.go:181] (0xc000a4cf20) Reply frame received for 5\nI0914 13:16:28.389696 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.389744 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.389769 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.389797 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.389812 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.389840 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.393294 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.393307 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.393316 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.394115 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.394134 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.394145 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.394159 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.394168 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.394176 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.401671 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.401699 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.401738 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.401944 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.401958 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.401968 3150 log.go:181] (0xc000636000) (5) Data frame sent\nI0914 13:16:28.401978 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.401986 
3150 log.go:181] (0xc000636000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.402011 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.402047 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.402063 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.402088 3150 log.go:181] (0xc000636000) (5) Data frame sent\nI0914 13:16:28.407638 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.407661 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.407684 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.408494 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.408525 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.408541 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.408563 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.408577 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.408590 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.412421 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.412449 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.412475 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.413460 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.413481 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.413495 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.413507 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.413530 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.413553 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.418053 
3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.418078 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.418097 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.418538 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.418578 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.418595 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.418616 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.418630 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.418655 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.425374 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.425397 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.425416 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.426017 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.426039 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.426052 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.426073 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.426083 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.426094 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.429282 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.429294 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.429300 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.429975 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.430004 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.430018 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.430042 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.430055 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.430066 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.437219 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.437259 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.437289 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.437669 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.437697 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.437711 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.437732 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.437743 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.437754 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.444612 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.444638 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.444658 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.445279 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.445308 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.445322 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.445341 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.445352 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.445363 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.448567 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.448586 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 
13:16:28.448596 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.448803 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.448819 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.448831 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.448843 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.448883 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.448903 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.452491 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.452506 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.452514 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.453448 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.453467 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.453486 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.453508 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.453523 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.453539 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.458418 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.458444 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.458472 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.458880 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.458901 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.458928 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.458941 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 
13:16:28.458961 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.458980 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.462969 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.462994 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.463008 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.463511 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.463531 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.463543 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -sI0914 13:16:28.463580 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.463604 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.463626 3150 log.go:181] (0xc000636000) (5) Data frame sent\n --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.463664 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.463691 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.463706 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.471016 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.471041 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.471061 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.471801 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.471846 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.471863 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.471885 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.471903 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.471932 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.476911 3150 log.go:181] (0xc000a4cf20) Data frame received 
for 3\nI0914 13:16:28.476942 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.476966 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.477683 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.477710 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.477723 3150 log.go:181] (0xc000636000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.477742 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.477764 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.477789 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.482844 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.482857 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.482868 3150 log.go:181] (0xc0009a1040) (3) Data frame sent\nI0914 13:16:28.483712 3150 log.go:181] (0xc000a4cf20) Data frame received for 3\nI0914 13:16:28.483736 3150 log.go:181] (0xc0009a1040) (3) Data frame handling\nI0914 13:16:28.483760 3150 log.go:181] (0xc000a4cf20) Data frame received for 5\nI0914 13:16:28.483795 3150 log.go:181] (0xc000636000) (5) Data frame handling\nI0914 13:16:28.485662 3150 log.go:181] (0xc000a4cf20) Data frame received for 1\nI0914 13:16:28.485681 3150 log.go:181] (0xc0006375e0) (1) Data frame handling\nI0914 13:16:28.485692 3150 log.go:181] (0xc0006375e0) (1) Data frame sent\nI0914 13:16:28.485704 3150 log.go:181] (0xc000a4cf20) (0xc0006375e0) Stream removed, broadcasting: 1\nI0914 13:16:28.485792 3150 log.go:181] (0xc000a4cf20) Go away received\nI0914 13:16:28.486073 3150 log.go:181] (0xc000a4cf20) (0xc0006375e0) Stream removed, broadcasting: 1\nI0914 13:16:28.486104 3150 log.go:181] (0xc000a4cf20) (0xc0009a1040) Stream removed, broadcasting: 3\nI0914 13:16:28.486116 3150 log.go:181] (0xc000a4cf20) (0xc000636000) Stream removed, broadcasting: 5\n" Sep 14 13:16:28.490: INFO: 
stdout: "\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9\naffinity-clusterip-timeout-25ft9" Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: INFO: Received response from host: affinity-clusterip-timeout-25ft9 Sep 14 13:16:28.490: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 execpod-affinityzpnkd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.100.59.64:80/' Sep 14 13:16:28.691: INFO: stderr: "I0914 13:16:28.618230 3168 log.go:181] (0xc000142370) (0xc000ad05a0) Create stream\nI0914 13:16:28.618285 3168 log.go:181] (0xc000142370) (0xc000ad05a0) Stream added, broadcasting: 1\nI0914 13:16:28.620261 3168 log.go:181] (0xc000142370) Reply frame received for 1\nI0914 13:16:28.620305 3168 log.go:181] (0xc000142370) (0xc000caa1e0) Create stream\nI0914 13:16:28.620315 3168 log.go:181] (0xc000142370) (0xc000caa1e0) Stream added, broadcasting: 3\nI0914 13:16:28.621270 3168 log.go:181] (0xc000142370) Reply frame received for 3\nI0914 13:16:28.621310 3168 log.go:181] (0xc000142370) (0xc000ad0be0) Create stream\nI0914 13:16:28.621323 3168 log.go:181] (0xc000142370) (0xc000ad0be0) Stream added, broadcasting: 5\nI0914 13:16:28.622463 3168 log.go:181] (0xc000142370) Reply frame received for 5\nI0914 13:16:28.678086 3168 log.go:181] (0xc000142370) Data frame received for 5\nI0914 13:16:28.678118 3168 log.go:181] (0xc000ad0be0) (5) Data frame handling\nI0914 13:16:28.678147 3168 log.go:181] (0xc000ad0be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:28.684302 3168 log.go:181] (0xc000142370) Data frame received for 3\nI0914 13:16:28.684320 3168 log.go:181] (0xc000caa1e0) (3) Data frame handling\nI0914 13:16:28.684330 3168 log.go:181] (0xc000caa1e0) (3) Data frame sent\nI0914 13:16:28.684663 3168 log.go:181] (0xc000142370) Data frame received for 5\nI0914 13:16:28.684685 3168 log.go:181] (0xc000142370) Data frame received for 3\nI0914 13:16:28.684703 3168 log.go:181] (0xc000caa1e0) (3) Data frame handling\nI0914 13:16:28.684718 3168 log.go:181] (0xc000ad0be0) (5) Data frame handling\nI0914 13:16:28.686459 3168 log.go:181] (0xc000142370) Data frame received for 
1\nI0914 13:16:28.686472 3168 log.go:181] (0xc000ad05a0) (1) Data frame handling\nI0914 13:16:28.686484 3168 log.go:181] (0xc000ad05a0) (1) Data frame sent\nI0914 13:16:28.686492 3168 log.go:181] (0xc000142370) (0xc000ad05a0) Stream removed, broadcasting: 1\nI0914 13:16:28.686711 3168 log.go:181] (0xc000142370) Go away received\nI0914 13:16:28.686753 3168 log.go:181] (0xc000142370) (0xc000ad05a0) Stream removed, broadcasting: 1\nI0914 13:16:28.686767 3168 log.go:181] (0xc000142370) (0xc000caa1e0) Stream removed, broadcasting: 3\nI0914 13:16:28.686773 3168 log.go:181] (0xc000142370) (0xc000ad0be0) Stream removed, broadcasting: 5\n" Sep 14 13:16:28.691: INFO: stdout: "affinity-clusterip-timeout-25ft9" Sep 14 13:16:43.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-1506 execpod-affinityzpnkd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.100.59.64:80/' Sep 14 13:16:43.933: INFO: stderr: "I0914 13:16:43.829625 3186 log.go:181] (0xc00018ce70) (0xc000c868c0) Create stream\nI0914 13:16:43.829679 3186 log.go:181] (0xc00018ce70) (0xc000c868c0) Stream added, broadcasting: 1\nI0914 13:16:43.834763 3186 log.go:181] (0xc00018ce70) Reply frame received for 1\nI0914 13:16:43.834995 3186 log.go:181] (0xc00018ce70) (0xc000542460) Create stream\nI0914 13:16:43.835035 3186 log.go:181] (0xc00018ce70) (0xc000542460) Stream added, broadcasting: 3\nI0914 13:16:43.836999 3186 log.go:181] (0xc00018ce70) Reply frame received for 3\nI0914 13:16:43.837028 3186 log.go:181] (0xc00018ce70) (0xc000c86000) Create stream\nI0914 13:16:43.837035 3186 log.go:181] (0xc00018ce70) (0xc000c86000) Stream added, broadcasting: 5\nI0914 13:16:43.838215 3186 log.go:181] (0xc00018ce70) Reply frame received for 5\nI0914 13:16:43.923459 3186 log.go:181] (0xc00018ce70) Data frame received for 5\nI0914 13:16:43.923489 3186 log.go:181] (0xc000c86000) (5) Data frame handling\nI0914 13:16:43.923506 3186 log.go:181] 
(0xc000c86000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.59.64:80/\nI0914 13:16:43.925711 3186 log.go:181] (0xc00018ce70) Data frame received for 3\nI0914 13:16:43.925743 3186 log.go:181] (0xc000542460) (3) Data frame handling\nI0914 13:16:43.925778 3186 log.go:181] (0xc000542460) (3) Data frame sent\nI0914 13:16:43.926371 3186 log.go:181] (0xc00018ce70) Data frame received for 3\nI0914 13:16:43.926383 3186 log.go:181] (0xc000542460) (3) Data frame handling\nI0914 13:16:43.926871 3186 log.go:181] (0xc00018ce70) Data frame received for 5\nI0914 13:16:43.926899 3186 log.go:181] (0xc000c86000) (5) Data frame handling\nI0914 13:16:43.928539 3186 log.go:181] (0xc00018ce70) Data frame received for 1\nI0914 13:16:43.928578 3186 log.go:181] (0xc000c868c0) (1) Data frame handling\nI0914 13:16:43.928602 3186 log.go:181] (0xc000c868c0) (1) Data frame sent\nI0914 13:16:43.928629 3186 log.go:181] (0xc00018ce70) (0xc000c868c0) Stream removed, broadcasting: 1\nI0914 13:16:43.928672 3186 log.go:181] (0xc00018ce70) Go away received\nI0914 13:16:43.929140 3186 log.go:181] (0xc00018ce70) (0xc000c868c0) Stream removed, broadcasting: 1\nI0914 13:16:43.929165 3186 log.go:181] (0xc00018ce70) (0xc000542460) Stream removed, broadcasting: 3\nI0914 13:16:43.929179 3186 log.go:181] (0xc00018ce70) (0xc000c86000) Stream removed, broadcasting: 5\n" Sep 14 13:16:43.934: INFO: stdout: "affinity-clusterip-timeout-4fw4v" Sep 14 13:16:43.934: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1506, will wait for the garbage collector to delete the pods Sep 14 13:16:44.123: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 91.227466ms Sep 14 13:16:44.623: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.225969ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:16:56.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1506" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:55.852 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":263,"skipped":4249,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:16:56.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] 
RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:16:56.186: INFO: Creating deployment "test-recreate-deployment" Sep 14 13:16:56.195: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 14 13:16:56.207: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Sep 14 13:16:58.215: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 14 13:16:58.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:17:00.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63735686216, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686216, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:17:02.227: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Sep 14 13:17:02.280: INFO: Updating deployment test-recreate-deployment Sep 14 13:17:02.280: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run alongside old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 14 13:17:03.067: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1271 /apis/apps/v1/namespaces/deployment-1271/deployments/test-recreate-deployment 5d13938d-b151-4418-b29d-7c0c4a575228 281906 2 2020-09-14 13:16:56 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-14 13:17:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-14 13:17:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042bbaa8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-14 13:17:02 +0000 UTC,LastTransitionTime:2020-09-14 13:17:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-14 13:17:03 +0000 UTC,LastTransitionTime:2020-09-14 13:16:56 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 14 13:17:03.071: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-1271 /apis/apps/v1/namespaces/deployment-1271/replicasets/test-recreate-deployment-f79dd4667 9cf8c8e2-7467-40bc-a306-f09c8fa5f649 281905 1 2020-09-14 13:17:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5d13938d-b151-4418-b29d-7c0c4a575228 0xc0042bbf80 0xc0042bbf81}] [] [{kube-controller-manager Update apps/v1 2020-09-14 13:17:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5d13938d-b151-4418-b29d-7c0c4a575228\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042bbff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 13:17:03.071: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 14 13:17:03.071: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-1271 /apis/apps/v1/namespaces/deployment-1271/replicasets/test-recreate-deployment-c96cf48f 62d14e1b-e573-4aa8-bdcd-aa1aa7187f98 281894 2 2020-09-14 13:16:56 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5d13938d-b151-4418-b29d-7c0c4a575228 0xc0042bbe8f 0xc0042bbea0}] [] [{kube-controller-manager Update apps/v1 2020-09-14 13:17:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5d13938d-b151-4418-b29d-7c0c4a575228\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelec
tor{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042bbf18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 14 13:17:03.075: INFO: Pod "test-recreate-deployment-f79dd4667-crpft" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-crpft test-recreate-deployment-f79dd4667- deployment-1271 /api/v1/namespaces/deployment-1271/pods/test-recreate-deployment-f79dd4667-crpft ef0cab02-1672-4d33-8748-04341bab37d5 281904 0 2020-09-14 13:17:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 9cf8c8e2-7467-40bc-a306-f09c8fa5f649 0xc0069c0430 0xc0069c0431}] [] [{kube-controller-manager Update v1 2020-09-14 13:17:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9cf8c8e2-7467-40bc-a306-f09c8fa5f649\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 13:17:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghntn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghntn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghntn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:17:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:17:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:17:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:17:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-09-14 13:17:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:03.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1271" for this suite. 
• [SLOW TEST:7.021 seconds] [sig-apps] Deployment /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":264,"skipped":4249,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:03.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Sep 14 13:17:03.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config cluster-info' Sep 14 13:17:03.248: INFO: stderr: "" Sep 14 
13:17:03.248: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42909\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42909/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:03.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-641" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":265,"skipped":4251,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:03.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:03.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4526" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":266,"skipped":4266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:03.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-5c0b176e-f8f8-4e51-bcb3-7ae0f64e8599 STEP: Creating a pod to test consume secrets Sep 14 
13:17:03.954: INFO: Waiting up to 5m0s for pod "pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85" in namespace "secrets-6776" to be "Succeeded or Failed" Sep 14 13:17:03.978: INFO: Pod "pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85": Phase="Pending", Reason="", readiness=false. Elapsed: 24.497163ms Sep 14 13:17:05.983: INFO: Pod "pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028986902s Sep 14 13:17:07.990: INFO: Pod "pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036617202s STEP: Saw pod success Sep 14 13:17:07.990: INFO: Pod "pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85" satisfied condition "Succeeded or Failed" Sep 14 13:17:07.993: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85 container secret-volume-test: STEP: delete the pod Sep 14 13:17:08.031: INFO: Waiting for pod pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85 to disappear Sep 14 13:17:08.058: INFO: Pod pod-secrets-99948637-7df0-4666-8fdc-2e070c1fcb85 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:08.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6776" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4319,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:08.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:12.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2460" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:12.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 14 13:17:12.550: INFO: Waiting up to 5m0s for pod "pod-a19b9335-e857-476f-83a0-4cafb5526d0b" in namespace "emptydir-7952" to be "Succeeded or Failed" Sep 14 13:17:12.591: INFO: Pod "pod-a19b9335-e857-476f-83a0-4cafb5526d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.116267ms Sep 14 13:17:14.595: INFO: Pod "pod-a19b9335-e857-476f-83a0-4cafb5526d0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045156936s Sep 14 13:17:16.600: INFO: Pod "pod-a19b9335-e857-476f-83a0-4cafb5526d0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049720827s STEP: Saw pod success Sep 14 13:17:16.600: INFO: Pod "pod-a19b9335-e857-476f-83a0-4cafb5526d0b" satisfied condition "Succeeded or Failed" Sep 14 13:17:16.603: INFO: Trying to get logs from node latest-worker2 pod pod-a19b9335-e857-476f-83a0-4cafb5526d0b container test-container: STEP: delete the pod Sep 14 13:17:16.719: INFO: Waiting for pod pod-a19b9335-e857-476f-83a0-4cafb5526d0b to disappear Sep 14 13:17:16.753: INFO: Pod pod-a19b9335-e857-476f-83a0-4cafb5526d0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:16.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7952" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:16.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on 
modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 14 13:17:21.425: INFO: Successfully updated pod "annotationupdate7c1b22f5-e1bb-48c1-8bc4-e8ad9fb1d7bf" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:23.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8288" for this suite. • [SLOW TEST:6.691 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:23.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Sep 14 13:17:23.551: INFO: Waiting up to 5m0s for pod "var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18" in namespace "var-expansion-8034" to be "Succeeded or Failed" Sep 14 13:17:23.556: INFO: Pod "var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18": Phase="Pending", Reason="", readiness=false. Elapsed: 5.363187ms Sep 14 13:17:25.560: INFO: Pod "var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0094846s Sep 14 13:17:27.564: INFO: Pod "var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01298948s STEP: Saw pod success Sep 14 13:17:27.564: INFO: Pod "var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18" satisfied condition "Succeeded or Failed" Sep 14 13:17:27.567: INFO: Trying to get logs from node latest-worker2 pod var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18 container dapi-container: STEP: delete the pod Sep 14 13:17:27.597: INFO: Waiting for pod var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18 to disappear Sep 14 13:17:27.605: INFO: Pod var-expansion-ff69b231-dce2-4a22-a8d4-f35d0c89fc18 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:27.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8034" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:27.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3231 STEP: creating service affinity-clusterip-transition in namespace services-3231 STEP: creating replication controller affinity-clusterip-transition in namespace services-3231 I0914 13:17:27.701203 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3231, replica count: 3 I0914 13:17:30.751604 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:17:33.751841 7 runners.go:190] 
affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:17:36.752111 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:17:36.758: INFO: Creating new exec pod Sep 14 13:17:41.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3231 execpod-affinitylnnzs -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Sep 14 13:17:42.045: INFO: stderr: "I0914 13:17:41.941260 3222 log.go:181] (0xc0002794a0) (0xc0005ac500) Create stream\nI0914 13:17:41.941310 3222 log.go:181] (0xc0002794a0) (0xc0005ac500) Stream added, broadcasting: 1\nI0914 13:17:41.943240 3222 log.go:181] (0xc0002794a0) Reply frame received for 1\nI0914 13:17:41.943290 3222 log.go:181] (0xc0002794a0) (0xc0006a61e0) Create stream\nI0914 13:17:41.943308 3222 log.go:181] (0xc0002794a0) (0xc0006a61e0) Stream added, broadcasting: 3\nI0914 13:17:41.944534 3222 log.go:181] (0xc0002794a0) Reply frame received for 3\nI0914 13:17:41.944562 3222 log.go:181] (0xc0002794a0) (0xc0006a6780) Create stream\nI0914 13:17:41.944571 3222 log.go:181] (0xc0002794a0) (0xc0006a6780) Stream added, broadcasting: 5\nI0914 13:17:41.945424 3222 log.go:181] (0xc0002794a0) Reply frame received for 5\nI0914 13:17:42.040047 3222 log.go:181] (0xc0002794a0) Data frame received for 5\nI0914 13:17:42.040079 3222 log.go:181] (0xc0006a6780) (5) Data frame handling\nI0914 13:17:42.040113 3222 log.go:181] (0xc0006a6780) (5) Data frame sent\nI0914 13:17:42.040125 3222 log.go:181] (0xc0002794a0) Data frame received for 5\nI0914 13:17:42.040218 3222 log.go:181] (0xc0006a6780) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0914 
13:17:42.040279 3222 log.go:181] (0xc0006a6780) (5) Data frame sent\nI0914 13:17:42.040451 3222 log.go:181] (0xc0002794a0) Data frame received for 5\nI0914 13:17:42.040473 3222 log.go:181] (0xc0006a6780) (5) Data frame handling\nI0914 13:17:42.040627 3222 log.go:181] (0xc0002794a0) Data frame received for 3\nI0914 13:17:42.040647 3222 log.go:181] (0xc0006a61e0) (3) Data frame handling\nI0914 13:17:42.042125 3222 log.go:181] (0xc0002794a0) Data frame received for 1\nI0914 13:17:42.042142 3222 log.go:181] (0xc0005ac500) (1) Data frame handling\nI0914 13:17:42.042150 3222 log.go:181] (0xc0005ac500) (1) Data frame sent\nI0914 13:17:42.042167 3222 log.go:181] (0xc0002794a0) (0xc0005ac500) Stream removed, broadcasting: 1\nI0914 13:17:42.042181 3222 log.go:181] (0xc0002794a0) Go away received\nI0914 13:17:42.042555 3222 log.go:181] (0xc0002794a0) (0xc0005ac500) Stream removed, broadcasting: 1\nI0914 13:17:42.042570 3222 log.go:181] (0xc0002794a0) (0xc0006a61e0) Stream removed, broadcasting: 3\nI0914 13:17:42.042578 3222 log.go:181] (0xc0002794a0) (0xc0006a6780) Stream removed, broadcasting: 5\n" Sep 14 13:17:42.045: INFO: stdout: "" Sep 14 13:17:42.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3231 execpod-affinitylnnzs -- /bin/sh -x -c nc -zv -t -w 2 10.104.199.12 80' Sep 14 13:17:42.244: INFO: stderr: "I0914 13:17:42.161797 3240 log.go:181] (0xc00097a000) (0xc000c96000) Create stream\nI0914 13:17:42.161841 3240 log.go:181] (0xc00097a000) (0xc000c96000) Stream added, broadcasting: 1\nI0914 13:17:42.163308 3240 log.go:181] (0xc00097a000) Reply frame received for 1\nI0914 13:17:42.163350 3240 log.go:181] (0xc00097a000) (0xc000634000) Create stream\nI0914 13:17:42.163359 3240 log.go:181] (0xc00097a000) (0xc000634000) Stream added, broadcasting: 3\nI0914 13:17:42.164312 3240 log.go:181] (0xc00097a000) Reply frame received for 3\nI0914 13:17:42.164373 3240 log.go:181] (0xc00097a000) 
(0xc000930b40) Create stream\nI0914 13:17:42.164385 3240 log.go:181] (0xc00097a000) (0xc000930b40) Stream added, broadcasting: 5\nI0914 13:17:42.165658 3240 log.go:181] (0xc00097a000) Reply frame received for 5\nI0914 13:17:42.239537 3240 log.go:181] (0xc00097a000) Data frame received for 3\nI0914 13:17:42.239583 3240 log.go:181] (0xc000634000) (3) Data frame handling\nI0914 13:17:42.239609 3240 log.go:181] (0xc00097a000) Data frame received for 5\nI0914 13:17:42.239621 3240 log.go:181] (0xc000930b40) (5) Data frame handling\nI0914 13:17:42.239634 3240 log.go:181] (0xc000930b40) (5) Data frame sent\nI0914 13:17:42.239649 3240 log.go:181] (0xc00097a000) Data frame received for 5\nI0914 13:17:42.239661 3240 log.go:181] (0xc000930b40) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.199.12 80\nConnection to 10.104.199.12 80 port [tcp/http] succeeded!\nI0914 13:17:42.240903 3240 log.go:181] (0xc00097a000) Data frame received for 1\nI0914 13:17:42.240930 3240 log.go:181] (0xc000c96000) (1) Data frame handling\nI0914 13:17:42.240942 3240 log.go:181] (0xc000c96000) (1) Data frame sent\nI0914 13:17:42.240954 3240 log.go:181] (0xc00097a000) (0xc000c96000) Stream removed, broadcasting: 1\nI0914 13:17:42.240969 3240 log.go:181] (0xc00097a000) Go away received\nI0914 13:17:42.241576 3240 log.go:181] (0xc00097a000) (0xc000c96000) Stream removed, broadcasting: 1\nI0914 13:17:42.241616 3240 log.go:181] (0xc00097a000) (0xc000634000) Stream removed, broadcasting: 3\nI0914 13:17:42.241643 3240 log.go:181] (0xc00097a000) (0xc000930b40) Stream removed, broadcasting: 5\n"
Sep 14 13:17:42.244: INFO: stdout: ""
Sep 14 13:17:42.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3231 execpod-affinitylnnzs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.199.12:80/ ; done'
Sep 14 13:17:42.562: INFO: stderr: "I0914 13:17:42.391255 3258 log.go:181] (0xc0000ecf20) 
(0xc000b1a780) Create stream\nI0914 13:17:42.391318 3258 log.go:181] (0xc0000ecf20) (0xc000b1a780) Stream added, broadcasting: 1\nI0914 13:17:42.395848 3258 log.go:181] (0xc0000ecf20) Reply frame received for 1\nI0914 13:17:42.395894 3258 log.go:181] (0xc0000ecf20) (0xc000cc80a0) Create stream\nI0914 13:17:42.395914 3258 log.go:181] (0xc0000ecf20) (0xc000cc80a0) Stream added, broadcasting: 3\nI0914 13:17:42.396871 3258 log.go:181] (0xc0000ecf20) Reply frame received for 3\nI0914 13:17:42.396908 3258 log.go:181] (0xc0000ecf20) (0xc000646000) Create stream\nI0914 13:17:42.396923 3258 log.go:181] (0xc0000ecf20) (0xc000646000) Stream added, broadcasting: 5\nI0914 13:17:42.397774 3258 log.go:181] (0xc0000ecf20) Reply frame received for 5\nI0914 13:17:42.456716 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.456759 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.456788 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.456815 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.456828 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.456846 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.460521 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.460539 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.460550 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.460993 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.461011 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.461017 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.461030 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.461049 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.461066 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.467227 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.467252 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.467268 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.468081 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.468117 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.468127 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.468245 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.468287 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.468325 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.472744 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.472762 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.472770 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.473319 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.473342 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.473351 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.473389 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.473413 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.473431 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.478245 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.478264 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.478277 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.479234 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.479262 3258 log.go:181] (0xc000cc80a0) (3) Data frame 
handling\nI0914 13:17:42.479276 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.479317 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.479331 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.479346 3258 log.go:181] (0xc000646000) (5) Data frame sent\nI0914 13:17:42.479362 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.479378 3258 log.go:181] (0xc000646000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.479410 3258 log.go:181] (0xc000646000) (5) Data frame sent\nI0914 13:17:42.486340 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.486354 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.486360 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.486803 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.486813 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.486820 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.486847 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.486867 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.486885 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.490931 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.490944 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.490950 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.491459 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.491499 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.491519 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.491555 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.491581 3258 log.go:181] (0xc000646000) 
(5) Data frame handling\nI0914 13:17:42.491598 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.497739 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.497751 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.497757 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.498477 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.498497 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.498521 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.498567 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.498594 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.498616 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.502814 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.502848 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.502867 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.503244 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.503264 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.503279 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.503301 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.503315 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.503330 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.510745 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.510776 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.510807 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.511430 3258 log.go:181] (0xc0000ecf20) 
Data frame received for 3\nI0914 13:17:42.511462 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.511493 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.511516 3258 log.go:181] (0xc000646000) (5) Data frame sent\nI0914 13:17:42.511530 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.511546 3258 log.go:181] (0xc000646000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.511571 3258 log.go:181] (0xc000646000) (5) Data frame sent\nI0914 13:17:42.511587 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.511608 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.517318 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.517340 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.517351 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.517870 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.517893 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.517929 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.518004 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.518023 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.518043 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.525615 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.525652 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.525686 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.526287 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.526314 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.526340 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.526357 3258 
log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.526379 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.526391 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.533338 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.533359 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.533390 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.533728 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.533743 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.533755 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.533809 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.533822 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.533828 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.539794 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.539806 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.539813 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.540318 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.540347 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.540376 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.540468 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.540489 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.540510 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.544727 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.544745 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.544762 3258 
log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.545287 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.545315 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.545332 3258 log.go:181] (0xc000646000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.545350 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.545421 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.545451 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.550335 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.550358 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.550393 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.550970 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.550995 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.551019 3258 log.go:181] (0xc000646000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.551035 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.551057 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.551078 3258 log.go:181] (0xc000646000) (5) Data frame sent\nI0914 13:17:42.554401 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.554434 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.554468 3258 log.go:181] (0xc000cc80a0) (3) Data frame sent\nI0914 13:17:42.555658 3258 log.go:181] (0xc0000ecf20) Data frame received for 3\nI0914 13:17:42.555683 3258 log.go:181] (0xc000cc80a0) (3) Data frame handling\nI0914 13:17:42.555709 3258 log.go:181] (0xc0000ecf20) Data frame received for 5\nI0914 13:17:42.555727 3258 log.go:181] (0xc000646000) (5) Data frame handling\nI0914 13:17:42.557561 3258 log.go:181] (0xc0000ecf20) Data frame received for 1\nI0914 
13:17:42.557594 3258 log.go:181] (0xc000b1a780) (1) Data frame handling\nI0914 13:17:42.557629 3258 log.go:181] (0xc000b1a780) (1) Data frame sent\nI0914 13:17:42.557682 3258 log.go:181] (0xc0000ecf20) (0xc000b1a780) Stream removed, broadcasting: 1\nI0914 13:17:42.557781 3258 log.go:181] (0xc0000ecf20) Go away received\nI0914 13:17:42.558231 3258 log.go:181] (0xc0000ecf20) (0xc000b1a780) Stream removed, broadcasting: 1\nI0914 13:17:42.558258 3258 log.go:181] (0xc0000ecf20) (0xc000cc80a0) Stream removed, broadcasting: 3\nI0914 13:17:42.558271 3258 log.go:181] (0xc0000ecf20) (0xc000646000) Stream removed, broadcasting: 5\n"
Sep 14 13:17:42.562: INFO: stdout: "\naffinity-clusterip-transition-xt82v\naffinity-clusterip-transition-xt82v\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-xl2t5\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-xt82v\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-xt82v\naffinity-clusterip-transition-xl2t5\naffinity-clusterip-transition-xl2t5\naffinity-clusterip-transition-xt82v\naffinity-clusterip-transition-xl2t5\naffinity-clusterip-transition-xl2t5\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-xt82v"
Sep 14 13:17:42.562: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-76sdw
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xl2t5
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-76sdw
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-76sdw
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xl2t5
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xl2t5
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xl2t5
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xl2t5
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-76sdw
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-76sdw
Sep 14 13:17:42.563: INFO: Received response from host: affinity-clusterip-transition-xt82v
Sep 14 13:17:42.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3231 execpod-affinitylnnzs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.199.12:80/ ; done'
Sep 14 13:17:42.893: INFO: stderr: "I0914 13:17:42.722191 3276 log.go:181] (0xc000ed6370) (0xc0004101e0) Create stream\nI0914 13:17:42.722257 3276 log.go:181] (0xc000ed6370) (0xc0004101e0) Stream added, broadcasting: 1\nI0914 13:17:42.727490 3276 log.go:181] (0xc000ed6370) Reply frame received for 1\nI0914 13:17:42.727539 3276 log.go:181] (0xc000ed6370) (0xc000bfa000) Create stream\nI0914 13:17:42.727553 3276 log.go:181] (0xc000ed6370) (0xc000bfa000) Stream added, broadcasting: 3\nI0914 13:17:42.728533 3276 log.go:181] (0xc000ed6370) Reply frame received for 3\nI0914 13:17:42.728563 3276 log.go:181] (0xc000ed6370) (0xc000bfa0a0) Create stream\nI0914 13:17:42.728570 3276 log.go:181] (0xc000ed6370) (0xc000bfa0a0) Stream added, broadcasting: 5\nI0914 13:17:42.729410 3276 log.go:181] (0xc000ed6370) Reply frame received for 5\nI0914 13:17:42.783318 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.783361 3276 log.go:181] (0xc000bfa000) (3) 
Data frame handling\nI0914 13:17:42.783376 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.783420 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.783451 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.783472 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.789321 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.789354 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.789395 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.790102 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.790138 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.790151 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.790169 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.790189 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.790202 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.796367 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.796404 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.796436 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.797286 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.797333 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.797358 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.797384 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.797407 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.797431 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.801573 3276 log.go:181] 
(0xc000ed6370) Data frame received for 3\nI0914 13:17:42.801593 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.801625 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.802545 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.802595 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.802610 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.802629 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.802640 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.802651 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.809422 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.809450 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.809470 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.810132 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.810168 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.810203 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.810245 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.810264 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.810291 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.817281 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.817332 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.817357 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.818240 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.818257 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.818268 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.818382 
3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.818396 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.818414 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.824830 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.824854 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.824880 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.825345 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.825468 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.825499 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.825527 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.825538 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.825555 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.828925 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.828952 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.828970 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.829525 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.829546 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.829558 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.829576 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.829586 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.829598 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.833957 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.833993 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 
13:17:42.834024 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.834468 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.834499 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.834511 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.834530 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.834545 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.834556 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.834569 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.834579 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.834606 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.839890 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.839913 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.839947 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.840678 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.840712 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0914 13:17:42.840736 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.840791 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.840818 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.840838 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.840849 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.840859 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.840881 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n http://10.104.199.12:80/\nI0914 13:17:42.845695 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.845716 3276 log.go:181] (0xc000bfa000) (3) Data frame 
handling\nI0914 13:17:42.845736 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.846594 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.846645 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.846674 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.846701 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.846717 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.846731 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.846738 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.846745 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.846761 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.851312 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.851343 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.851422 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.851555 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.851590 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.851649 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.851732 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.851758 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.851777 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.856608 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.856640 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.856655 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.857103 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.857120 3276 log.go:181] (0xc000bfa0a0) 
(5) Data frame handling\nI0914 13:17:42.857132 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\nI0914 13:17:42.857149 3276 log.go:181] (0xc000ed6370) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0914 13:17:42.857166 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.857180 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n http://10.104.199.12:80/\nI0914 13:17:42.857311 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.857342 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.857380 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.863993 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.864020 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.864041 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.864779 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.864826 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.864852 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.864883 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.864908 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.864936 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.870825 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.870855 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.870872 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.871226 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.871243 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.871261 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0914 13:17:42.871274 3276 log.go:181] (0xc000ed6370) Data frame received for 
5\nI0914 13:17:42.871312 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.871345 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n 2 http://10.104.199.12:80/\nI0914 13:17:42.871381 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.871405 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.871418 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.878643 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.878665 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.878693 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.879456 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.879484 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.879499 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.879518 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.879528 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.879538 3276 log.go:181] (0xc000bfa0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.199.12:80/\nI0914 13:17:42.886188 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.886207 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.886217 3276 log.go:181] (0xc000bfa000) (3) Data frame sent\nI0914 13:17:42.886835 3276 log.go:181] (0xc000ed6370) Data frame received for 5\nI0914 13:17:42.886851 3276 log.go:181] (0xc000bfa0a0) (5) Data frame handling\nI0914 13:17:42.887056 3276 log.go:181] (0xc000ed6370) Data frame received for 3\nI0914 13:17:42.887084 3276 log.go:181] (0xc000bfa000) (3) Data frame handling\nI0914 13:17:42.889146 3276 log.go:181] (0xc000ed6370) Data frame received for 1\nI0914 13:17:42.889161 3276 log.go:181] (0xc0004101e0) (1) Data frame handling\nI0914 13:17:42.889171 3276 log.go:181] (0xc0004101e0) (1) Data frame sent\nI0914 
13:17:42.889346 3276 log.go:181] (0xc000ed6370) (0xc0004101e0) Stream removed, broadcasting: 1\nI0914 13:17:42.889365 3276 log.go:181] (0xc000ed6370) Go away received\nI0914 13:17:42.889765 3276 log.go:181] (0xc000ed6370) (0xc0004101e0) Stream removed, broadcasting: 1\nI0914 13:17:42.889786 3276 log.go:181] (0xc000ed6370) (0xc000bfa000) Stream removed, broadcasting: 3\nI0914 13:17:42.889797 3276 log.go:181] (0xc000ed6370) (0xc000bfa0a0) Stream removed, broadcasting: 5\n" Sep 14 13:17:42.894: INFO: stdout: "\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw\naffinity-clusterip-transition-76sdw" Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: 
affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Received response from host: affinity-clusterip-transition-76sdw Sep 14 13:17:42.895: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3231, will wait for the garbage collector to delete the pods Sep 14 13:17:43.642: INFO: Deleting ReplicationController affinity-clusterip-transition took: 239.424684ms Sep 14 13:17:44.242: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.260262ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:17:56.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3231" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:28.435 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":272,"skipped":4437,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:17:56.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: Creating a pod to test downward API volume plugin Sep 14 13:17:56.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62" in namespace "downward-api-5857" to be "Succeeded or Failed" Sep 14 13:17:56.237: INFO: Pod "downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62": Phase="Pending", Reason="", readiness=false. Elapsed: 18.615111ms Sep 14 13:17:58.268: INFO: Pod "downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049672335s Sep 14 13:18:00.273: INFO: Pod "downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054017332s STEP: Saw pod success Sep 14 13:18:00.273: INFO: Pod "downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62" satisfied condition "Succeeded or Failed" Sep 14 13:18:00.276: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62 container client-container: STEP: delete the pod Sep 14 13:18:00.359: INFO: Waiting for pod downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62 to disappear Sep 14 13:18:00.384: INFO: Pod downwardapi-volume-d2be12a6-4d4c-4b43-b0e5-c34940e54c62 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:18:00.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5857" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4441,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:18:00.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:18:16.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7293" for this suite. 
• [SLOW TEST:16.361 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":274,"skipped":4455,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:18:16.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a9dd2324-ff43-44cb-a95f-1e6a473c8b20 STEP: Creating a pod to test consume secrets Sep 14 13:18:16.896: INFO: Waiting up to 5m0s for pod "pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508" in namespace "secrets-994" to be "Succeeded or 
Failed" Sep 14 13:18:16.914: INFO: Pod "pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508": Phase="Pending", Reason="", readiness=false. Elapsed: 17.638637ms Sep 14 13:18:18.919: INFO: Pod "pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022458643s Sep 14 13:18:20.923: INFO: Pod "pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026928947s STEP: Saw pod success Sep 14 13:18:20.923: INFO: Pod "pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508" satisfied condition "Succeeded or Failed" Sep 14 13:18:20.926: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508 container secret-volume-test: STEP: delete the pod Sep 14 13:18:20.957: INFO: Waiting for pod pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508 to disappear Sep 14 13:18:20.960: INFO: Pod pod-secrets-be2b34df-014e-4ed2-b851-021f47fee508 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:18:20.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-994" for this suite. STEP: Destroying namespace "secret-namespace-8066" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:18:21.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:18:27.144: INFO: DNS probes using dns-test-f4a2eb03-7950-4498-8fd4-b580927396fa succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these 
commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:18:33.536: INFO: File wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:33.539: INFO: File jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:33.539: INFO: Lookups using dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 failed for: [wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local] Sep 14 13:18:38.545: INFO: File wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:38.549: INFO: File jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 14 13:18:38.549: INFO: Lookups using dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 failed for: [wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local] Sep 14 13:18:43.567: INFO: File wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:43.578: INFO: File jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:43.578: INFO: Lookups using dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 failed for: [wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local] Sep 14 13:18:48.545: INFO: File wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:48.549: INFO: File jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:48.549: INFO: Lookups using dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 failed for: [wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local] Sep 14 13:18:53.550: INFO: File wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 14 13:18:53.553: INFO: File jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local from pod dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 14 13:18:53.553: INFO: Lookups using dns-5056/dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 failed for: [wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local] Sep 14 13:18:58.548: INFO: DNS probes using dns-test-265812d7-842a-433e-b6e2-2a4f5308ec81 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5056.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5056.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:19:05.119: INFO: DNS probes using dns-test-895b079c-66fc-41f8-9f81-908c8b8ef21d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:19:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5056" for this suite. 
• [SLOW TEST:44.201 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":276,"skipped":4527,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:19:05.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:19:22.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6626" for this suite. • [SLOW TEST:17.208 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":277,"skipped":4549,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:19:22.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Sep 14 13:19:22.492: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix379607531/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:19:22.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8604" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":278,"skipped":4558,"failed":0} ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:19:22.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-6483/configmap-test-5f79fca6-c1ab-402a-907b-5222648d5190 STEP: Creating a pod to test consume configMaps Sep 14 13:19:22.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313" in namespace "configmap-6483" to be "Succeeded or Failed" Sep 14 13:19:22.683: INFO: Pod "pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313": Phase="Pending", Reason="", readiness=false. Elapsed: 19.878011ms Sep 14 13:19:24.687: INFO: Pod "pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023789644s Sep 14 13:19:26.691: INFO: Pod "pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027035372s STEP: Saw pod success Sep 14 13:19:26.691: INFO: Pod "pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313" satisfied condition "Succeeded or Failed" Sep 14 13:19:26.693: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313 container env-test: STEP: delete the pod Sep 14 13:19:26.731: INFO: Waiting for pod pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313 to disappear Sep 14 13:19:26.740: INFO: Pod pod-configmaps-a3e85758-f36e-40c5-8472-38688f7ea313 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:19:26.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6483" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4558,"failed":0} SSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:19:26.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication 
controller proxy-service-wqrcn in namespace proxy-2890 I0914 13:19:26.868329 7 runners.go:190] Created replication controller with name: proxy-service-wqrcn, namespace: proxy-2890, replica count: 1 I0914 13:19:27.918829 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:19:28.919152 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:19:29.919429 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:19:30.919628 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0914 13:19:31.919872 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0914 13:19:32.920195 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0914 13:19:33.920412 7 runners.go:190] proxy-service-wqrcn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:19:33.924: INFO: setup took 7.126901317s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 14 13:19:33.933: INFO: (0) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 8.452526ms) Sep 14 13:19:33.933: INFO: (0) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 8.417364ms) Sep 14 13:19:33.933: INFO: (0) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 9.273298ms) Sep 14 13:19:33.933: INFO: (0) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 9.481315ms) Sep 14 13:19:33.934: INFO: (0) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 9.610926ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 12.358552ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 12.362147ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 12.491027ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 12.458043ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 12.512252ms) Sep 14 13:19:33.937: INFO: (0) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 12.785186ms) Sep 14 13:19:33.939: INFO: (0) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 14.857816ms) Sep 14 13:19:33.939: INFO: (0) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 15.035751ms) Sep 14 13:19:33.940: INFO: (0) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 15.47511ms) Sep 14 13:19:33.940: INFO: (0) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 16.042672ms) Sep 14 13:19:33.943: INFO: (0) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... 
(200; 3.495984ms) Sep 14 13:19:33.946: INFO: (1) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 3.543647ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 3.993869ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 3.99706ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.04438ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.182577ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 4.326716ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 4.254692ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.388787ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 4.343934ms) Sep 14 13:19:33.947: INFO: (1) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test (200; 3.645045ms) Sep 14 13:19:33.952: INFO: (2) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 3.868294ms) Sep 14 13:19:33.952: INFO: (2) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 3.907565ms) Sep 14 13:19:33.952: INFO: (2) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 3.84196ms) Sep 14 13:19:33.952: INFO: (2) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.199613ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.435618ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.421107ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.443974ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 4.731872ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 4.751786ms) Sep 14 13:19:33.953: INFO: (2) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... (200; 6.261811ms) Sep 14 13:19:33.995: INFO: (3) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 6.963769ms) Sep 14 13:19:33.995: INFO: (3) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 7.036818ms) Sep 14 13:19:33.995: INFO: (3) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 7.013406ms) Sep 14 13:19:33.995: INFO: (3) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 7.114442ms) Sep 14 13:19:33.995: INFO: (3) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... 
(200; 8.08955ms) Sep 14 13:19:33.996: INFO: (3) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 8.108647ms) Sep 14 13:19:33.996: INFO: (3) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 8.151652ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 9.288292ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 9.295667ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 9.348025ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 9.607454ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... (200; 9.610115ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 9.563069ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 9.711454ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... 
(200; 9.774155ms) Sep 14 13:19:34.006: INFO: (4) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 9.670895ms) Sep 14 13:19:34.007: INFO: (4) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 11.007645ms) Sep 14 13:19:34.008: INFO: (4) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 11.246766ms) Sep 14 13:19:34.008: INFO: (4) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 11.27829ms) Sep 14 13:19:34.008: INFO: (4) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 11.313913ms) Sep 14 13:19:34.008: INFO: (4) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 11.272803ms) Sep 14 13:19:34.008: INFO: (4) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 11.27125ms) Sep 14 13:19:34.010: INFO: (5) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 2.302455ms) Sep 14 13:19:34.011: INFO: (5) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 3.027138ms) Sep 14 13:19:34.011: INFO: (5) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 3.635241ms) Sep 14 13:19:34.011: INFO: (5) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 3.571481ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... (200; 3.765766ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 3.808875ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 4.422603ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... 
(200; 4.490017ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.535603ms) Sep 14 13:19:34.012: INFO: (5) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.619508ms) Sep 14 13:19:34.013: INFO: (5) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 4.944845ms) Sep 14 13:19:34.013: INFO: (5) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 4.997933ms) Sep 14 13:19:34.013: INFO: (5) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... (200; 2.963051ms) Sep 14 13:19:34.016: INFO: (6) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 3.29357ms) Sep 14 13:19:34.018: INFO: (6) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... (200; 6.348075ms) Sep 14 13:19:34.019: INFO: (6) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 6.282356ms) Sep 14 13:19:34.019: INFO: (6) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 6.281681ms) Sep 14 13:19:34.019: INFO: (6) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 6.301532ms) Sep 14 13:19:34.022: INFO: (7) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 3.04219ms) Sep 14 13:19:34.023: INFO: (7) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 3.236724ms) Sep 14 13:19:34.023: INFO: (7) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 3.796305ms) Sep 14 13:19:34.023: INFO: (7) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 3.993756ms) Sep 14 13:19:34.024: INFO: (7) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 4.178419ms) Sep 14 13:19:34.024: INFO: (7) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test (200; 4.277047ms) Sep 14 13:19:34.024: INFO: (7) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.364589ms) Sep 14 13:19:34.024: INFO: (7) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 4.405372ms) Sep 14 13:19:34.024: INFO: (7) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.490402ms) Sep 14 13:19:34.025: INFO: (7) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 5.642187ms) Sep 14 13:19:34.026: INFO: (7) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 6.188827ms) Sep 14 13:19:34.026: INFO: (7) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 6.293667ms) Sep 14 13:19:34.026: INFO: (7) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 6.236626ms) Sep 14 13:19:34.026: INFO: (7) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 6.37478ms) Sep 14 13:19:34.026: INFO: (7) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 6.455549ms) Sep 14 13:19:34.032: INFO: (8) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 6.540602ms) Sep 14 13:19:34.032: INFO: (8) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 6.491915ms) Sep 14 13:19:34.032: INFO: (8) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 6.531688ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 6.61738ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 6.578451ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 6.566612ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 6.678331ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 7.029496ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 7.045492ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 7.092592ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 7.161844ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 7.251318ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 7.211564ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 7.232946ms) Sep 14 13:19:34.033: INFO: (8) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... (200; 5.882328ms) Sep 14 13:19:34.039: INFO: (9) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... 
(200; 6.102881ms) Sep 14 13:19:34.039: INFO: (9) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 6.217263ms) Sep 14 13:19:34.040: INFO: (9) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 6.400453ms) Sep 14 13:19:34.040: INFO: (9) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 6.594647ms) Sep 14 13:19:34.043: INFO: (10) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 3.012983ms) Sep 14 13:19:34.043: INFO: (10) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... (200; 4.406594ms) Sep 14 13:19:34.044: INFO: (10) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 4.463151ms) Sep 14 13:19:34.044: INFO: (10) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 4.512874ms) Sep 14 13:19:34.044: INFO: (10) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 4.46956ms) Sep 14 13:19:34.044: INFO: (10) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.490316ms) Sep 14 13:19:34.045: INFO: (10) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 5.254224ms) Sep 14 13:19:34.045: INFO: (10) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 5.536638ms) Sep 14 13:19:34.046: INFO: (10) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 5.483916ms) Sep 14 13:19:34.046: INFO: (10) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 5.46052ms) Sep 14 13:19:34.046: INFO: (10) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 5.641804ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.989012ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 5.038029ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 5.014442ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 5.037779ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.159841ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 5.301241ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 5.360359ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 5.716715ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 5.791919ms) Sep 14 13:19:34.051: INFO: (11) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... (200; 4.599169ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 4.768293ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.885289ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.809343ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.856202ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 5.094586ms) Sep 14 13:19:34.057: INFO: (12) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.160411ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 5.659789ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 5.698822ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 5.684326ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 5.719137ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 5.772074ms) Sep 14 13:19:34.058: INFO: (12) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test (200; 3.024182ms) Sep 14 13:19:34.062: INFO: (13) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 3.667688ms) Sep 14 13:19:34.062: INFO: (13) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 3.791853ms) Sep 14 13:19:34.062: INFO: (13) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 3.765158ms) Sep 14 13:19:34.062: INFO: (13) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 3.993187ms) Sep 14 13:19:34.062: INFO: (13) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... 
(200; 4.325989ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... (200; 4.51017ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.443237ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.562902ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 4.509264ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 4.573809ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 4.536764ms) Sep 14 13:19:34.063: INFO: (13) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 5.010882ms) Sep 14 13:19:34.066: INFO: (14) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 2.209116ms) Sep 14 13:19:34.066: INFO: (14) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 2.520238ms) Sep 14 13:19:34.068: INFO: (14) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 4.0609ms) Sep 14 13:19:34.068: INFO: (14) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.351931ms) Sep 14 13:19:34.068: INFO: (14) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.338446ms) Sep 14 13:19:34.068: INFO: (14) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.504489ms) Sep 14 13:19:34.068: INFO: (14) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test (200; 4.987378ms) Sep 14 13:19:34.069: INFO: (14) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.024665ms) Sep 14 13:19:34.069: INFO: (14) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 5.212997ms) Sep 14 13:19:34.069: INFO: (14) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 5.423566ms) Sep 14 13:19:34.069: INFO: (14) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 5.348064ms) Sep 14 13:19:34.073: INFO: (15) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 3.756954ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 4.62227ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 4.590484ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 4.734074ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.841226ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.806652ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.865032ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 4.858219ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 4.92007ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 4.936778ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 5.00214ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 5.091331ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.06456ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 5.041179ms) Sep 14 13:19:34.074: INFO: (15) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test<... 
(200; 4.172803ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.417372ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 4.417866ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 4.413378ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 4.403362ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 4.446155ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 4.414902ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.653504ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 4.591737ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.600732ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 4.808962ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... 
(200; 4.863399ms) Sep 14 13:19:34.079: INFO: (16) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 4.865872ms) Sep 14 13:19:34.081: INFO: (17) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 2.074035ms) Sep 14 13:19:34.081: INFO: (17) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 2.067341ms) Sep 14 13:19:34.081: INFO: (17) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 2.065415ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:1080/proxy/: ... (200; 3.859393ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 3.910669ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 3.866336ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 3.897709ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 3.950168ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 4.22784ms) Sep 14 13:19:34.083: INFO: (17) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 4.251836ms) Sep 14 13:19:34.084: INFO: (17) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 4.943044ms) Sep 14 13:19:34.084: INFO: (17) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: test (200; 8.074425ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 8.034061ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... 
(200; 8.115508ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 8.051313ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 8.088447ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 8.175799ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname1/proxy/: foo (200; 8.130104ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 8.111724ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 8.306557ms) Sep 14 13:19:34.093: INFO: (18) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 8.318174ms) Sep 14 13:19:34.094: INFO: (18) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 8.861684ms) Sep 14 13:19:34.094: INFO: (18) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 8.850697ms) Sep 14 13:19:34.094: INFO: (18) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 9.092579ms) Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:160/proxy/: foo (200; 4.713858ms) Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:1080/proxy/: test<... (200; 4.678712ms) Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname2/proxy/: bar (200; 5.104659ms) Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:443/proxy/: ... 
(200; 5.331745ms)
Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/pods/http:proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.30958ms)
Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb/proxy/: test (200; 5.462093ms)
Sep 14 13:19:34.099: INFO: (19) /api/v1/namespaces/proxy-2890/services/proxy-service-wqrcn:portname1/proxy/: foo (200; 5.503867ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:460/proxy/: tls baz (200; 5.651754ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/pods/proxy-service-wqrcn-cjmwb:162/proxy/: bar (200; 5.58738ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname2/proxy/: tls qux (200; 5.629291ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/pods/https:proxy-service-wqrcn-cjmwb:462/proxy/: tls qux (200; 5.615315ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/services/https:proxy-service-wqrcn:tlsportname1/proxy/: tls baz (200; 5.610986ms)
Sep 14 13:19:34.100: INFO: (19) /api/v1/namespaces/proxy-2890/services/http:proxy-service-wqrcn:portname2/proxy/: bar (200; 5.624988ms)
STEP: deleting ReplicationController proxy-service-wqrcn in namespace proxy-2890, will wait for the garbage collector to delete the pods
Sep 14 13:19:34.159: INFO: Deleting ReplicationController proxy-service-wqrcn took: 7.461309ms
Sep 14 13:19:34.559: INFO: Terminating ReplicationController proxy-service-wqrcn pods took: 400.21239ms
[AfterEach] version v1
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:19:36.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2890" for this suite.
• [SLOW TEST:9.830 seconds]
[sig-network] Proxy
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":280,"skipped":4562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:19:36.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:19:51.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2525" for this suite.
STEP: Destroying namespace "nsdeletetest-4475" for this suite.
Sep 14 13:19:51.832: INFO: Namespace nsdeletetest-4475 was already deleted
STEP: Destroying namespace "nsdeletetest-4487" for this suite.
• [SLOW TEST:15.256 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":281,"skipped":4607,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:19:51.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-38f11b62-bfff-4191-8286-6ccdc94730b0
STEP: Creating a pod to test consume configMaps
Sep 14 13:19:51.958: INFO: Waiting up to 5m0s for pod "pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972" in namespace "configmap-4793" to be "Succeeded or Failed"
Sep 14 13:19:51.970: INFO: Pod "pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972": Phase="Pending", Reason="", readiness=false. Elapsed: 12.360295ms
Sep 14 13:19:53.975: INFO: Pod "pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017795956s
Sep 14 13:19:55.980: INFO: Pod "pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02268644s
STEP: Saw pod success
Sep 14 13:19:55.980: INFO: Pod "pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972" satisfied condition "Succeeded or Failed"
Sep 14 13:19:55.983: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972 container configmap-volume-test:
STEP: delete the pod
Sep 14 13:19:56.037: INFO: Waiting for pod pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972 to disappear
Sep 14 13:19:56.077: INFO: Pod pod-configmaps-15e4a242-e82c-4162-803f-3d0bf4dfc972 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:19:56.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4793" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:19:56.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:20:07.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3486" for this suite. 
• [SLOW TEST:11.154 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":283,"skipped":4655,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:20:07.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 14 13:20:07.286: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:20:22.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1985" for this suite. • [SLOW TEST:15.754 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":284,"skipped":4656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:20:22.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-1b4fab2e-79d4-4257-a8b8-8d444b9697cb STEP: Creating a pod to test consume configMaps Sep 14 13:20:23.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0" in namespace "projected-7086" to be "Succeeded or Failed" Sep 14 13:20:23.097: INFO: Pod "pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.45488ms Sep 14 13:20:25.102: INFO: Pod "pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02090963s Sep 14 13:20:27.107: INFO: Pod "pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026013749s STEP: Saw pod success Sep 14 13:20:27.107: INFO: Pod "pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0" satisfied condition "Succeeded or Failed" Sep 14 13:20:27.110: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0 container projected-configmap-volume-test: STEP: delete the pod Sep 14 13:20:27.150: INFO: Waiting for pod pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0 to disappear Sep 14 13:20:27.162: INFO: Pod pod-projected-configmaps-fa6b9408-7de8-4d32-9be4-691c06caf4d0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:20:27.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7086" for this suite. 
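The projected-ConfigMap "with mappings" variant above differs from a plain ConfigMap volume in that individual keys are remapped to chosen file paths via `items`. A hedged sketch (all names, keys, and the image are hypothetical):

```yaml
# Illustrative sketch of a projected volume with key-to-path mappings.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.33                    # hypothetical image
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical name
          items:
          - key: data-2                # key in the ConfigMap
            path: path/to/data-2       # remapped location inside the volume
```

The "mapping" being verified is that the file appears at the remapped `path`, not at the default file name derived from the key.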
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4688,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:20:27.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 14 13:20:28.144: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 14 13:20:30.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 14 13:20:32.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63735686428, loc:(*time.Location)(0x7702840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 14 13:20:35.245: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:20:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2118" for this suite. STEP: Destroying namespace "webhook-2118-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.384 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":286,"skipped":4700,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:20:35.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] 
should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9566.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9566.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9566.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9566.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.141.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.141.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.141.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.141.5_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9566.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9566.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9566.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9566.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9566.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9566.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 5.141.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.141.5_udp@PTR;check="$$(dig +tcp +noall +answer +search 5.141.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.141.5_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:20:42.100: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.103: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.105: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.108: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.130: INFO: Unable to read jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods 
dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.138: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:42.157: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: [wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:20:47.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods 
dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.198: INFO: Unable to read jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.200: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.204: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.207: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:47.245: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: [wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:20:52.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.195: INFO: Unable to read jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.201: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.204: 
INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:52.222: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: [wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:20:57.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.173: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.192: INFO: Unable to read 
jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.195: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.197: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.200: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:20:57.216: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: [wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:21:02.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.166: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.172: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.196: INFO: Unable to read jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.203: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.206: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:02.225: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: 
[wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:21:07.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.168: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.194: INFO: Unable to read jessie_udp@dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-9566.svc.cluster.local from pod 
dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.199: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.202: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local from pod dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c: the server could not find the requested resource (get pods dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c) Sep 14 13:21:07.220: INFO: Lookups using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c failed for: [wheezy_udp@dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@dns-test-service.dns-9566.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_udp@dns-test-service.dns-9566.svc.cluster.local jessie_tcp@dns-test-service.dns-9566.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9566.svc.cluster.local] Sep 14 13:21:12.226: INFO: DNS probes using dns-9566/dns-test-ddc80d09-cd25-4236-99ab-034a2fd55a9c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:12.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9566" for this suite. 
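[editor's note] The DNS probes above resolve `dns-test-service.dns-9566.svc.cluster.local` (A records) and `_http._tcp.dns-test-service...` (SRV records) over UDP and TCP from a client pod; the early "could not find the requested resource" lines are expected retries while the records propagate. A minimal sketch of the Service the suite creates (the name and namespace are taken from the log; the selector and port layout are assumptions, not the suite's exact spec):

```yaml
# Regular ClusterIP Service whose A and SRV records the probe pod resolves.
# The suite also creates a headless variant (identical except `clusterIP: None`),
# which is why the teardown deletes both "the test service" and
# "the test headless service".
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
  namespace: dns-9566
spec:
  selector:
    app: dns-test        # assumed label; the log does not show the selector
  ports:
  - name: http           # yields the _http._tcp SRV record queried above
    protocol: TCP
    port: 80
```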
• [SLOW TEST:37.160 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":287,"skipped":4705,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:12.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 14 13:21:13.100: INFO: Waiting up to 5m0s for pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c" in namespace "var-expansion-8666" to be "Succeeded or Failed" Sep 14 13:21:13.104: INFO: Pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.89775ms Sep 14 13:21:16.820: INFO: Pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.719415749s Sep 14 13:21:18.887: INFO: Pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.786434463s Sep 14 13:21:20.892: INFO: Pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.791944336s STEP: Saw pod success Sep 14 13:21:20.892: INFO: Pod "var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c" satisfied condition "Succeeded or Failed" Sep 14 13:21:20.895: INFO: Trying to get logs from node latest-worker2 pod var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c container dapi-container: STEP: delete the pod Sep 14 13:21:20.967: INFO: Waiting for pod var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c to disappear Sep 14 13:21:20.972: INFO: Pod var-expansion-3232e883-c32d-4959-b0b0-94d99ae7e08c no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:20.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8666" for this suite. 
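[editor's note] The "composing env vars into new env vars" test above relies on the kubelet's `$(VAR)` dependent-variable expansion: an env entry may reference variables defined earlier in the same list. A minimal sketch of such a pod (the container name `dapi-container` appears in the log; the variable names and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOO_COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: FOO_COMPOSED
      # $(FOO) is expanded by the kubelet because FOO is defined above it;
      # an undefined reference would be left as the literal string.
      value: "prefix-$(FOO)-suffix"
```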
• [SLOW TEST:8.265 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4719,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:20.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the 
/apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:21.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1759" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":289,"skipped":4737,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:21.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:21:21.121: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:21.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6541" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":290,"skipped":4741,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:21.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3684 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3684 I0914 13:21:21.961098 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3684, replica count: 2 I0914 13:21:25.011477 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0914 13:21:28.011804 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 14 13:21:28.011: INFO: Creating new exec pod Sep 14 13:21:33.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodbj8kr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 14 13:21:33.291: INFO: stderr: "I0914 13:21:33.184819 3312 log.go:181] (0xc00085adc0) (0xc000754780) Create stream\nI0914 13:21:33.184857 3312 log.go:181] (0xc00085adc0) (0xc000754780) Stream added, broadcasting: 1\nI0914 13:21:33.190203 3312 log.go:181] (0xc00085adc0) Reply frame received for 1\nI0914 13:21:33.190244 3312 log.go:181] (0xc00085adc0) (0xc000556320) Create stream\nI0914 13:21:33.190255 3312 log.go:181] (0xc00085adc0) (0xc000556320) Stream added, broadcasting: 3\nI0914 13:21:33.191260 3312 log.go:181] (0xc00085adc0) Reply frame received for 3\nI0914 13:21:33.191293 3312 log.go:181] (0xc00085adc0) (0xc000b700a0) Create stream\nI0914 13:21:33.191318 3312 log.go:181] (0xc00085adc0) (0xc000b700a0) Stream added, broadcasting: 5\nI0914 13:21:33.192349 3312 log.go:181] (0xc00085adc0) Reply frame 
received for 5\nI0914 13:21:33.286340 3312 log.go:181] (0xc00085adc0) Data frame received for 5\nI0914 13:21:33.286375 3312 log.go:181] (0xc000b700a0) (5) Data frame handling\nI0914 13:21:33.286396 3312 log.go:181] (0xc000b700a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0914 13:21:33.286621 3312 log.go:181] (0xc00085adc0) Data frame received for 5\nI0914 13:21:33.286657 3312 log.go:181] (0xc000b700a0) (5) Data frame handling\nI0914 13:21:33.286692 3312 log.go:181] (0xc000b700a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0914 13:21:33.287071 3312 log.go:181] (0xc00085adc0) Data frame received for 3\nI0914 13:21:33.287091 3312 log.go:181] (0xc000556320) (3) Data frame handling\nI0914 13:21:33.287355 3312 log.go:181] (0xc00085adc0) Data frame received for 5\nI0914 13:21:33.287367 3312 log.go:181] (0xc000b700a0) (5) Data frame handling\nI0914 13:21:33.288808 3312 log.go:181] (0xc00085adc0) Data frame received for 1\nI0914 13:21:33.288823 3312 log.go:181] (0xc000754780) (1) Data frame handling\nI0914 13:21:33.288829 3312 log.go:181] (0xc000754780) (1) Data frame sent\nI0914 13:21:33.288839 3312 log.go:181] (0xc00085adc0) (0xc000754780) Stream removed, broadcasting: 1\nI0914 13:21:33.288922 3312 log.go:181] (0xc00085adc0) Go away received\nI0914 13:21:33.289156 3312 log.go:181] (0xc00085adc0) (0xc000754780) Stream removed, broadcasting: 1\nI0914 13:21:33.289173 3312 log.go:181] (0xc00085adc0) (0xc000556320) Stream removed, broadcasting: 3\nI0914 13:21:33.289184 3312 log.go:181] (0xc00085adc0) (0xc000b700a0) Stream removed, broadcasting: 5\n" Sep 14 13:21:33.291: INFO: stdout: "" Sep 14 13:21:33.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=services-3684 execpodbj8kr -- /bin/sh -x -c nc -zv -t -w 2 10.104.79.227 80' Sep 14 13:21:33.519: INFO: stderr: "I0914 13:21:33.428294 3330 log.go:181] (0xc000eb0fd0) (0xc0003bff40) 
Create stream\nI0914 13:21:33.428352 3330 log.go:181] (0xc000eb0fd0) (0xc0003bff40) Stream added, broadcasting: 1\nI0914 13:21:33.434736 3330 log.go:181] (0xc000eb0fd0) Reply frame received for 1\nI0914 13:21:33.434790 3330 log.go:181] (0xc000eb0fd0) (0xc000a1c280) Create stream\nI0914 13:21:33.434818 3330 log.go:181] (0xc000eb0fd0) (0xc000a1c280) Stream added, broadcasting: 3\nI0914 13:21:33.435857 3330 log.go:181] (0xc000eb0fd0) Reply frame received for 3\nI0914 13:21:33.435893 3330 log.go:181] (0xc000eb0fd0) (0xc000aa60a0) Create stream\nI0914 13:21:33.435921 3330 log.go:181] (0xc000eb0fd0) (0xc000aa60a0) Stream added, broadcasting: 5\nI0914 13:21:33.437019 3330 log.go:181] (0xc000eb0fd0) Reply frame received for 5\nI0914 13:21:33.513499 3330 log.go:181] (0xc000eb0fd0) Data frame received for 3\nI0914 13:21:33.513540 3330 log.go:181] (0xc000a1c280) (3) Data frame handling\nI0914 13:21:33.513572 3330 log.go:181] (0xc000eb0fd0) Data frame received for 5\nI0914 13:21:33.513581 3330 log.go:181] (0xc000aa60a0) (5) Data frame handling\nI0914 13:21:33.513588 3330 log.go:181] (0xc000aa60a0) (5) Data frame sent\nI0914 13:21:33.513593 3330 log.go:181] (0xc000eb0fd0) Data frame received for 5\nI0914 13:21:33.513598 3330 log.go:181] (0xc000aa60a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.79.227 80\nConnection to 10.104.79.227 80 port [tcp/http] succeeded!\nI0914 13:21:33.514681 3330 log.go:181] (0xc000eb0fd0) Data frame received for 1\nI0914 13:21:33.514709 3330 log.go:181] (0xc0003bff40) (1) Data frame handling\nI0914 13:21:33.514726 3330 log.go:181] (0xc0003bff40) (1) Data frame sent\nI0914 13:21:33.514746 3330 log.go:181] (0xc000eb0fd0) (0xc0003bff40) Stream removed, broadcasting: 1\nI0914 13:21:33.514797 3330 log.go:181] (0xc000eb0fd0) Go away received\nI0914 13:21:33.515254 3330 log.go:181] (0xc000eb0fd0) (0xc0003bff40) Stream removed, broadcasting: 1\nI0914 13:21:33.515276 3330 log.go:181] (0xc000eb0fd0) (0xc000a1c280) Stream removed, broadcasting: 3\nI0914 
13:21:33.515286 3330 log.go:181] (0xc000eb0fd0) (0xc000aa60a0) Stream removed, broadcasting: 5\n" Sep 14 13:21:33.519: INFO: stdout: "" Sep 14 13:21:33.519: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3684" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.785 seconds] [sig-network] Services /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":291,"skipped":4743,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:33.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 14 13:21:33.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9506' Sep 14 13:21:34.029: INFO: stderr: "" Sep 14 13:21:34.029: INFO: stdout: "pod/pause created\n" Sep 14 13:21:34.029: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Sep 14 13:21:34.029: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9506" to be "running and ready" Sep 14 13:21:34.034: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.623395ms Sep 14 13:21:36.039: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009852662s Sep 14 13:21:38.043: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014075724s Sep 14 13:21:38.043: INFO: Pod "pause" satisfied condition "running and ready" Sep 14 13:21:38.043: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 14 13:21:38.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9506' Sep 14 13:21:38.150: INFO: stderr: "" Sep 14 13:21:38.150: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 14 13:21:38.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9506' Sep 14 13:21:38.245: INFO: stderr: "" Sep 14 13:21:38.245: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 14 13:21:38.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9506' Sep 14 13:21:38.347: INFO: stderr: "" Sep 14 13:21:38.347: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Sep 14 13:21:38.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9506' Sep 14 13:21:38.460: INFO: stderr: "" Sep 14 13:21:38.460: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 14 13:21:38.460: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9506' Sep 14 13:21:38.598: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 14 13:21:38.598: INFO: stdout: "pod \"pause\" force deleted\n" Sep 14 13:21:38.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9506' Sep 14 13:21:38.771: INFO: stderr: "No resources found in kubectl-9506 namespace.\n" Sep 14 13:21:38.771: INFO: stdout: "" Sep 14 13:21:38.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9506 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 14 13:21:38.995: INFO: stderr: "" Sep 14 13:21:38.995: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:38.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9506" for this suite. 
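[editor's note] The `kubectl label pods pause testing-label=testing-label-value` and trailing-dash removal (`testing-label-`) shown above also have a declarative equivalent as a strategic merge patch, which can be useful when scripting the same change. A sketch (hedged: this is the general patch semantics, not what the suite runs):

```yaml
# kubectl patch pod pause --type=strategic -p "$(cat <<'EOF' ...)"
# Adding or updating the label:
metadata:
  labels:
    testing-label: testing-label-value
# Removing it again: setting the key to null deletes it in a
# strategic merge patch, mirroring `kubectl label pods pause testing-label-`.
# metadata:
#   labels:
#     testing-label: null
```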
• [SLOW TEST:5.772 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":292,"skipped":4756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:39.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test 
downward API volume plugin Sep 14 13:21:39.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b" in namespace "projected-667" to be "Succeeded or Failed" Sep 14 13:21:39.550: INFO: Pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.058548ms Sep 14 13:21:41.573: INFO: Pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048426876s Sep 14 13:21:43.578: INFO: Pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b": Phase="Running", Reason="", readiness=true. Elapsed: 4.053201823s Sep 14 13:21:45.583: INFO: Pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057725556s STEP: Saw pod success Sep 14 13:21:45.583: INFO: Pod "downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b" satisfied condition "Succeeded or Failed" Sep 14 13:21:45.586: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b container client-container: STEP: delete the pod Sep 14 13:21:45.616: INFO: Waiting for pod downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b to disappear Sep 14 13:21:45.671: INFO: Pod downwardapi-volume-eae5bed8-4a15-4197-ad5b-f9dfe698882b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:45.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-667" for this suite. 
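[editor's note] The projected downward API test above verifies that `defaultMode` is applied to files materialized in a projected volume. A minimal sketch of the pod shape it exercises (the container name `client-container` appears in the log; the mount path, file name, and mode value are illustrative assumptions — the conformance test checks the default of 0644 rather than an explicit mode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400       # permission bits applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```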
• [SLOW TEST:6.361 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:45.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:21:49.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1766" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4827,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:21:49.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9075 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
a new StatefulSet Sep 14 13:21:50.009: INFO: Found 0 stateful pods, waiting for 3 Sep 14 13:22:00.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 13:22:00.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 13:22:00.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Sep 14 13:22:10.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 14 13:22:10.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 14 13:22:10.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 14 13:22:10.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9075 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 13:22:10.305: INFO: stderr: "I0914 13:22:10.170259 3493 log.go:181] (0xc00026c000) (0xc00089a960) Create stream\nI0914 13:22:10.170344 3493 log.go:181] (0xc00026c000) (0xc00089a960) Stream added, broadcasting: 1\nI0914 13:22:10.172911 3493 log.go:181] (0xc00026c000) Reply frame received for 1\nI0914 13:22:10.172951 3493 log.go:181] (0xc00026c000) (0xc000376320) Create stream\nI0914 13:22:10.172967 3493 log.go:181] (0xc00026c000) (0xc000376320) Stream added, broadcasting: 3\nI0914 13:22:10.173835 3493 log.go:181] (0xc00026c000) Reply frame received for 3\nI0914 13:22:10.173860 3493 log.go:181] (0xc00026c000) (0xc0005d0000) Create stream\nI0914 13:22:10.173867 3493 log.go:181] (0xc00026c000) (0xc0005d0000) Stream added, broadcasting: 5\nI0914 13:22:10.174734 3493 log.go:181] (0xc00026c000) Reply frame received for 5\nI0914 13:22:10.259827 3493 log.go:181] (0xc00026c000) Data frame received for 5\nI0914 13:22:10.259849 3493 log.go:181] (0xc0005d0000) (5) Data frame handling\nI0914 
13:22:10.259862 3493 log.go:181] (0xc0005d0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 13:22:10.294678 3493 log.go:181] (0xc00026c000) Data frame received for 3\nI0914 13:22:10.294722 3493 log.go:181] (0xc000376320) (3) Data frame handling\nI0914 13:22:10.294745 3493 log.go:181] (0xc000376320) (3) Data frame sent\nI0914 13:22:10.294955 3493 log.go:181] (0xc00026c000) Data frame received for 5\nI0914 13:22:10.294995 3493 log.go:181] (0xc0005d0000) (5) Data frame handling\nI0914 13:22:10.295088 3493 log.go:181] (0xc00026c000) Data frame received for 3\nI0914 13:22:10.295143 3493 log.go:181] (0xc000376320) (3) Data frame handling\nI0914 13:22:10.300841 3493 log.go:181] (0xc00026c000) Data frame received for 1\nI0914 13:22:10.300869 3493 log.go:181] (0xc00089a960) (1) Data frame handling\nI0914 13:22:10.300904 3493 log.go:181] (0xc00089a960) (1) Data frame sent\nI0914 13:22:10.300935 3493 log.go:181] (0xc00026c000) (0xc00089a960) Stream removed, broadcasting: 1\nI0914 13:22:10.300959 3493 log.go:181] (0xc00026c000) Go away received\nI0914 13:22:10.301241 3493 log.go:181] (0xc00026c000) (0xc00089a960) Stream removed, broadcasting: 1\nI0914 13:22:10.301258 3493 log.go:181] (0xc00026c000) (0xc000376320) Stream removed, broadcasting: 3\nI0914 13:22:10.301265 3493 log.go:181] (0xc00026c000) (0xc0005d0000) Stream removed, broadcasting: 5\n" Sep 14 13:22:10.305: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 13:22:10.305: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 14 13:22:20.347: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Sep 14 13:22:30.369: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9075 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 13:22:30.580: INFO: stderr: "I0914 13:22:30.510332 3511 log.go:181] (0xc000d1b080) (0xc000d12820) Create stream\nI0914 13:22:30.510386 3511 log.go:181] (0xc000d1b080) (0xc000d12820) Stream added, broadcasting: 1\nI0914 13:22:30.513309 3511 log.go:181] (0xc000d1b080) Reply frame received for 1\nI0914 13:22:30.513365 3511 log.go:181] (0xc000d1b080) (0xc0005401e0) Create stream\nI0914 13:22:30.513399 3511 log.go:181] (0xc000d1b080) (0xc0005401e0) Stream added, broadcasting: 3\nI0914 13:22:30.514530 3511 log.go:181] (0xc000d1b080) Reply frame received for 3\nI0914 13:22:30.514562 3511 log.go:181] (0xc000d1b080) (0xc000d128c0) Create stream\nI0914 13:22:30.514581 3511 log.go:181] (0xc000d1b080) (0xc000d128c0) Stream added, broadcasting: 5\nI0914 13:22:30.515611 3511 log.go:181] (0xc000d1b080) Reply frame received for 5\nI0914 13:22:30.574493 3511 log.go:181] (0xc000d1b080) Data frame received for 3\nI0914 13:22:30.574527 3511 log.go:181] (0xc0005401e0) (3) Data frame handling\nI0914 13:22:30.574547 3511 log.go:181] (0xc0005401e0) (3) Data frame sent\nI0914 13:22:30.574556 3511 log.go:181] (0xc000d1b080) Data frame received for 3\nI0914 13:22:30.574564 3511 log.go:181] (0xc0005401e0) (3) Data frame handling\nI0914 13:22:30.574607 3511 log.go:181] (0xc000d1b080) Data frame received for 5\nI0914 13:22:30.574617 3511 log.go:181] (0xc000d128c0) (5) Data frame handling\nI0914 13:22:30.574624 3511 log.go:181] (0xc000d128c0) (5) Data frame sent\nI0914 13:22:30.574629 3511 log.go:181] (0xc000d1b080) Data frame received for 5\nI0914 13:22:30.574634 3511 log.go:181] (0xc000d128c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 13:22:30.576517 3511 log.go:181] (0xc000d1b080) Data frame received for 1\nI0914 13:22:30.576551 3511 log.go:181] (0xc000d12820) (1) 
Data frame handling\nI0914 13:22:30.576592 3511 log.go:181] (0xc000d12820) (1) Data frame sent\nI0914 13:22:30.576625 3511 log.go:181] (0xc000d1b080) (0xc000d12820) Stream removed, broadcasting: 1\nI0914 13:22:30.576780 3511 log.go:181] (0xc000d1b080) Go away received\nI0914 13:22:30.577221 3511 log.go:181] (0xc000d1b080) (0xc000d12820) Stream removed, broadcasting: 1\nI0914 13:22:30.577247 3511 log.go:181] (0xc000d1b080) (0xc0005401e0) Stream removed, broadcasting: 3\nI0914 13:22:30.577259 3511 log.go:181] (0xc000d1b080) (0xc000d128c0) Stream removed, broadcasting: 5\n" Sep 14 13:22:30.580: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 14 13:22:30.580: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 14 13:22:50.609: INFO: Waiting for StatefulSet statefulset-9075/ss2 to complete update STEP: Rolling back to a previous revision Sep 14 13:23:00.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9075 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 14 13:23:03.476: INFO: stderr: "I0914 13:23:03.341257 3530 log.go:181] (0xc0000b8000) (0xc0005c0320) Create stream\nI0914 13:23:03.341323 3530 log.go:181] (0xc0000b8000) (0xc0005c0320) Stream added, broadcasting: 1\nI0914 13:23:03.343159 3530 log.go:181] (0xc0000b8000) Reply frame received for 1\nI0914 13:23:03.343193 3530 log.go:181] (0xc0000b8000) (0xc0005c1860) Create stream\nI0914 13:23:03.343205 3530 log.go:181] (0xc0000b8000) (0xc0005c1860) Stream added, broadcasting: 3\nI0914 13:23:03.344249 3530 log.go:181] (0xc0000b8000) Reply frame received for 3\nI0914 13:23:03.344299 3530 log.go:181] (0xc0000b8000) (0xc00061b220) Create stream\nI0914 13:23:03.344313 3530 log.go:181] (0xc0000b8000) (0xc00061b220) Stream added, broadcasting: 5\nI0914 13:23:03.345165 3530 
log.go:181] (0xc0000b8000) Reply frame received for 5\nI0914 13:23:03.426359 3530 log.go:181] (0xc0000b8000) Data frame received for 5\nI0914 13:23:03.426390 3530 log.go:181] (0xc00061b220) (5) Data frame handling\nI0914 13:23:03.426413 3530 log.go:181] (0xc00061b220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0914 13:23:03.468847 3530 log.go:181] (0xc0000b8000) Data frame received for 5\nI0914 13:23:03.468874 3530 log.go:181] (0xc00061b220) (5) Data frame handling\nI0914 13:23:03.468969 3530 log.go:181] (0xc0000b8000) Data frame received for 3\nI0914 13:23:03.469020 3530 log.go:181] (0xc0005c1860) (3) Data frame handling\nI0914 13:23:03.469052 3530 log.go:181] (0xc0005c1860) (3) Data frame sent\nI0914 13:23:03.469073 3530 log.go:181] (0xc0000b8000) Data frame received for 3\nI0914 13:23:03.469090 3530 log.go:181] (0xc0005c1860) (3) Data frame handling\nI0914 13:23:03.472831 3530 log.go:181] (0xc0000b8000) Data frame received for 1\nI0914 13:23:03.472868 3530 log.go:181] (0xc0005c0320) (1) Data frame handling\nI0914 13:23:03.472898 3530 log.go:181] (0xc0005c0320) (1) Data frame sent\nI0914 13:23:03.472921 3530 log.go:181] (0xc0000b8000) (0xc0005c0320) Stream removed, broadcasting: 1\nI0914 13:23:03.472950 3530 log.go:181] (0xc0000b8000) Go away received\nI0914 13:23:03.473461 3530 log.go:181] (0xc0000b8000) (0xc0005c0320) Stream removed, broadcasting: 1\nI0914 13:23:03.473486 3530 log.go:181] (0xc0000b8000) (0xc0005c1860) Stream removed, broadcasting: 3\nI0914 13:23:03.473511 3530 log.go:181] (0xc0000b8000) (0xc00061b220) Stream removed, broadcasting: 5\n" Sep 14 13:23:03.476: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 14 13:23:03.476: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 14 13:23:13.510: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 14 
13:23:23.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42909 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9075 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 14 13:23:23.785: INFO: stderr: "I0914 13:23:23.683473 3551 log.go:181] (0xc0008a2000) (0xc0007a81e0) Create stream\nI0914 13:23:23.683518 3551 log.go:181] (0xc0008a2000) (0xc0007a81e0) Stream added, broadcasting: 1\nI0914 13:23:23.685519 3551 log.go:181] (0xc0008a2000) Reply frame received for 1\nI0914 13:23:23.685573 3551 log.go:181] (0xc0008a2000) (0xc0007a83c0) Create stream\nI0914 13:23:23.685586 3551 log.go:181] (0xc0008a2000) (0xc0007a83c0) Stream added, broadcasting: 3\nI0914 13:23:23.686548 3551 log.go:181] (0xc0008a2000) Reply frame received for 3\nI0914 13:23:23.686580 3551 log.go:181] (0xc0008a2000) (0xc00098e000) Create stream\nI0914 13:23:23.686589 3551 log.go:181] (0xc0008a2000) (0xc00098e000) Stream added, broadcasting: 5\nI0914 13:23:23.687501 3551 log.go:181] (0xc0008a2000) Reply frame received for 5\nI0914 13:23:23.778818 3551 log.go:181] (0xc0008a2000) Data frame received for 3\nI0914 13:23:23.778851 3551 log.go:181] (0xc0007a83c0) (3) Data frame handling\nI0914 13:23:23.778864 3551 log.go:181] (0xc0007a83c0) (3) Data frame sent\nI0914 13:23:23.778873 3551 log.go:181] (0xc0008a2000) Data frame received for 3\nI0914 13:23:23.778882 3551 log.go:181] (0xc0007a83c0) (3) Data frame handling\nI0914 13:23:23.778944 3551 log.go:181] (0xc0008a2000) Data frame received for 5\nI0914 13:23:23.778977 3551 log.go:181] (0xc00098e000) (5) Data frame handling\nI0914 13:23:23.779020 3551 log.go:181] (0xc00098e000) (5) Data frame sent\nI0914 13:23:23.779043 3551 log.go:181] (0xc0008a2000) Data frame received for 5\nI0914 13:23:23.779063 3551 log.go:181] (0xc00098e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0914 13:23:23.780517 3551 log.go:181] (0xc0008a2000) Data frame received for 1\nI0914 
13:23:23.780530 3551 log.go:181] (0xc0007a81e0) (1) Data frame handling\nI0914 13:23:23.780538 3551 log.go:181] (0xc0007a81e0) (1) Data frame sent\nI0914 13:23:23.780743 3551 log.go:181] (0xc0008a2000) (0xc0007a81e0) Stream removed, broadcasting: 1\nI0914 13:23:23.780775 3551 log.go:181] (0xc0008a2000) Go away received\nI0914 13:23:23.781261 3551 log.go:181] (0xc0008a2000) (0xc0007a81e0) Stream removed, broadcasting: 1\nI0914 13:23:23.781288 3551 log.go:181] (0xc0008a2000) (0xc0007a83c0) Stream removed, broadcasting: 3\nI0914 13:23:23.781307 3551 log.go:181] (0xc0008a2000) (0xc00098e000) Stream removed, broadcasting: 5\n"
Sep 14 13:23:23.785: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 14 13:23:23.785: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Sep 14 13:23:33.808: INFO: Waiting for StatefulSet statefulset-9075/ss2 to complete update
Sep 14 13:23:33.808: INFO: Waiting for Pod statefulset-9075/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 14 13:23:33.808: INFO: Waiting for Pod statefulset-9075/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 14 13:23:33.808: INFO: Waiting for Pod statefulset-9075/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Sep 14 13:23:43.838: INFO: Waiting for StatefulSet statefulset-9075/ss2 to complete update
Sep 14 13:23:43.838: INFO: Waiting for Pod statefulset-9075/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 14 13:23:53.817: INFO: Deleting all statefulset in ns statefulset-9075
Sep 14 13:23:53.820: INFO: Scaling statefulset ss2 to 0
Sep 14 13:24:13.857: INFO: Waiting for statefulset status.replicas updated to 0
Sep 14 13:24:13.860: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:24:13.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9075" for this suite.
• [SLOW TEST:143.958 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":295,"skipped":4833,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:24:13.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-f72256ba-4a0b-44d2-9609-547e3291ba88
STEP: Creating a pod to test consume configMaps
Sep 14 13:24:14.009: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23" in namespace "projected-7791" to be "Succeeded or Failed"
Sep 14 13:24:14.014: INFO: Pod "pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23": Phase="Pending", Reason="", readiness=false. Elapsed: 5.020691ms
Sep 14 13:24:16.019: INFO: Pod "pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009982333s
Sep 14 13:24:18.024: INFO: Pod "pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014945857s
STEP: Saw pod success
Sep 14 13:24:18.024: INFO: Pod "pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23" satisfied condition "Succeeded or Failed"
Sep 14 13:24:18.030: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23 container projected-configmap-volume-test:
STEP: delete the pod
Sep 14 13:24:18.080: INFO: Waiting for pod pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23 to disappear
Sep 14 13:24:18.091: INFO: Pod pod-projected-configmaps-b03d90e4-84e7-49e6-b64a-d38333c3bc23 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:24:18.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7791" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":296,"skipped":4844,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:24:18.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 14 13:24:18.212: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Sep 14 13:24:18.219: INFO: Number of nodes with available pods: 0
Sep 14 13:24:18.219: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Sep 14 13:24:18.283: INFO: Number of nodes with available pods: 0
Sep 14 13:24:18.283: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:19.296: INFO: Number of nodes with available pods: 0
Sep 14 13:24:19.296: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:20.448: INFO: Number of nodes with available pods: 0
Sep 14 13:24:20.448: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:21.288: INFO: Number of nodes with available pods: 0
Sep 14 13:24:21.288: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:22.289: INFO: Number of nodes with available pods: 1
Sep 14 13:24:22.289: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Sep 14 13:24:22.324: INFO: Number of nodes with available pods: 1
Sep 14 13:24:22.324: INFO: Number of running nodes: 0, number of available pods: 1
Sep 14 13:24:23.327: INFO: Number of nodes with available pods: 0
Sep 14 13:24:23.327: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Sep 14 13:24:23.370: INFO: Number of nodes with available pods: 0
Sep 14 13:24:23.370: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:24.374: INFO: Number of nodes with available pods: 0
Sep 14 13:24:24.374: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:25.374: INFO: Number of nodes with available pods: 0
Sep 14 13:24:25.374: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:26.374: INFO: Number of nodes with available pods: 0
Sep 14 13:24:26.374: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:27.549: INFO: Number of nodes with available pods: 0
Sep 14 13:24:27.549: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:28.374: INFO: Number of nodes with available pods: 0
Sep 14 13:24:28.374: INFO: Node latest-worker is running more than one daemon pod
Sep 14 13:24:29.374: INFO: Number of nodes with available pods: 1
Sep 14 13:24:29.374: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-728, will wait for the garbage collector to delete the pods
Sep 14 13:24:29.439: INFO: Deleting DaemonSet.extensions daemon-set took: 5.427371ms
Sep 14 13:24:29.540: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.240716ms
Sep 14 13:24:35.543: INFO: Number of nodes with available pods: 0
Sep 14 13:24:35.543: INFO: Number of running nodes: 0, number of available pods: 0
Sep 14 13:24:35.546: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-728/daemonsets","resourceVersion":"284803"},"items":null}
Sep 14 13:24:35.549: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-728/pods","resourceVersion":"284803"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 14 13:24:35.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-728" for this suite.
• [SLOW TEST:17.513 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":297,"skipped":4862,"failed":0}
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 14 13:24:35.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1
600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4136.svc.cluster.local A)" && test -n 
"$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4136.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 14 13:24:41.764: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.767: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.771: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.773: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.783: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods 
dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.787: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.793: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:41.801: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:24:46.805: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.812: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested 
resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.814: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.817: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.846: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.848: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.851: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.853: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:46.859: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:24:51.805: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.809: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.812: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.815: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.824: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.827: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods 
dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.830: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.833: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:51.838: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:24:56.806: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.810: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.813: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods 
dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.817: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.825: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.828: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.831: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.834: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:24:56.839: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:25:01.805: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.809: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.812: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.815: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.825: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.828: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.832: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods 
dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.835: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:01.842: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:25:06.804: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.807: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.810: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.812: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods 
dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.820: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.822: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.824: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.825: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local from pod dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074: the server could not find the requested resource (get pods dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074) Sep 14 13:25:06.829: INFO: Lookups using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4136.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4136.svc.cluster.local jessie_udp@dns-test-service-2.dns-4136.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4136.svc.cluster.local] Sep 14 13:25:12.010: INFO: DNS probes using dns-4136/dns-test-d585e9fc-09ce-4272-af27-4700bb9e4074 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:25:13.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4136" for this suite. • [SLOW TEST:38.136 seconds] [sig-network] DNS /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":298,"skipped":4862,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:25:13.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 14 13:25:22.270: INFO: 
&Pod{ObjectMeta:{send-events-41c63fe6-26ef-4446-8bad-95981d174e1b events-3933 /api/v1/namespaces/events-3933/pods/send-events-41c63fe6-26ef-4446-8bad-95981d174e1b 32b39f83-d789-48ab-ac9a-82437468faf0 285010 0 2020-09-14 13:25:13 +0000 UTC map[name:foo time:793817657] map[] [] [] [{e2e.test Update v1 2020-09-14 13:25:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-14 13:25:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l27rk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l27rk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,
StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l27rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstrain
ts:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:25:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:25:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-14 13:25:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.96,StartTime:2020-09-14 13:25:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-14 13:25:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://29b0d05314d017182409b87bba25399e887bf3ea5bb69ec2e87d8f59bb763fb6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 14 13:25:24.275: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 14 13:25:26.281: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:25:26.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3933" for this suite. • [SLOW TEST:12.742 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":299,"skipped":4864,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:25:26.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 14 13:25:26.897: INFO: 
Creating ReplicaSet my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2 Sep 14 13:25:27.118: INFO: Pod name my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2: Found 0 pods out of 1 Sep 14 13:25:33.675: INFO: Pod name my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2: Found 1 pods out of 1 Sep 14 13:25:33.675: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2" is running Sep 14 13:25:35.709: INFO: Pod "my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2-2p5nz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 13:25:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 13:25:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 13:25:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-14 13:25:27 +0000 UTC Reason: Message:}]) Sep 14 13:25:35.710: INFO: Trying to dial the pod Sep 14 13:25:40.723: INFO: Controller my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2: Got expected result from replica 1 [my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2-2p5nz]: "my-hostname-basic-9637f18f-549c-4bda-954a-f66f52fa14d2-2p5nz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:25:40.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2049" for this suite. 
• [SLOW TEST:14.241 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":300,"skipped":4882,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:25:40.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0914 13:25:53.661136 7 metrics_grabber.go:105] Did not 
receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 14 13:26:55.678: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 14 13:26:55.678: INFO: Deleting pod "simpletest-rc-to-be-deleted-2xtzd" in namespace "gc-9553" Sep 14 13:26:55.778: INFO: Deleting pod "simpletest-rc-to-be-deleted-6qvbz" in namespace "gc-9553" Sep 14 13:26:56.180: INFO: Deleting pod "simpletest-rc-to-be-deleted-8bprw" in namespace "gc-9553" Sep 14 13:26:57.191: INFO: Deleting pod "simpletest-rc-to-be-deleted-94wfs" in namespace "gc-9553" Sep 14 13:26:57.656: INFO: Deleting pod "simpletest-rc-to-be-deleted-g8blj" in namespace "gc-9553" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:26:57.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9553" for this suite. 
• [SLOW TEST:77.507 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":301,"skipped":4889,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:26:58.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Sep 14 13:26:58.720: INFO: Waiting up to 5m0s for pod "var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a" in namespace "var-expansion-732" to be "Succeeded or Failed" Sep 14 13:26:59.009: INFO: Pod 
"var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 288.447696ms Sep 14 13:27:01.436: INFO: Pod "var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715642454s Sep 14 13:27:03.520: INFO: Pod "var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.799730808s Sep 14 13:27:05.535: INFO: Pod "var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.814301516s STEP: Saw pod success Sep 14 13:27:05.535: INFO: Pod "var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a" satisfied condition "Succeeded or Failed" Sep 14 13:27:05.553: INFO: Trying to get logs from node latest-worker2 pod var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a container dapi-container: STEP: delete the pod Sep 14 13:27:05.916: INFO: Waiting for pod var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a to disappear Sep 14 13:27:05.927: INFO: Pod var-expansion-0c372900-723e-4785-95c6-531e9a6f8d2a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:27:05.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-732" for this suite. 
• [SLOW TEST:7.743 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":302,"skipped":4906,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 14 13:27:05.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 14 13:27:23.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-946" for this suite. • [SLOW TEST:17.477 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.1-rc.0.37+ccc0405f3c92f5/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":303,"skipped":4910,"failed":0} SSSSSSSSSSSSSSSSSSSSep 14 13:27:23.461: INFO: Running AfterSuite actions on all nodes Sep 14 13:27:23.461: INFO: Running AfterSuite actions on node 1 Sep 14 13:27:23.461: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 5938.412 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS