I0311 18:37:03.074891 12 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0311 18:37:03.075117 12 e2e.go:124] Starting e2e run "37568227-9d5e-4de7-bc87-756a6f76894b" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1615487822 - Will randomize all specs
Will run 275 of 4994 specs

Mar 11 18:37:03.127: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 18:37:03.134: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 11 18:37:03.164: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 11 18:37:03.225: INFO: The status of Pod cmk-init-discover-node1-vk7wm is Succeeded, skipping waiting
Mar 11 18:37:03.225: INFO: The status of Pod cmk-init-discover-node2-29mrv is Succeeded, skipping waiting
Mar 11 18:37:03.225: INFO: 40 / 45 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 11 18:37:03.225: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Mar 11 18:37:03.225: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 11 18:37:03.245: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Mar 11 18:37:03.245: INFO: e2e test version: v1.18.16
Mar 11 18:37:03.246: INFO: kube-apiserver version: v1.18.8
Mar 11 18:37:03.246: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 18:37:03.253: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:37:03.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Mar 11 18:37:03.275: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Mar 11 18:37:03.284: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-853
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:37:03.390: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:04.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-853" for this suite.

• [SLOW TEST:61.415 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":1,"skipped":6,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:04.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-871
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-871.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-871.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-871.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-871.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-871.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 109.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.109_udp@PTR;check="$$(dig +tcp +noall +answer +search 109.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.109_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-871.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-871.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-871.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-871.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-871.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-871.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-871.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 109.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.109_udp@PTR;check="$$(dig +tcp +noall +answer +search 109.43.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.43.109_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 18:38:14.837: INFO: Unable to read wheezy_udp@dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.839: INFO: Unable to read wheezy_tcp@dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.842: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.862: INFO: Unable to read jessie_udp@dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.865: INFO: Unable to read jessie_tcp@dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local from pod dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec: the server could not find the requested resource (get pods dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec)
Mar 11 18:38:14.886: INFO: Lookups using dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec failed for: [wheezy_udp@dns-test-service.dns-871.svc.cluster.local wheezy_tcp@dns-test-service.dns-871.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local jessie_udp@dns-test-service.dns-871.svc.cluster.local jessie_tcp@dns-test-service.dns-871.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-871.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-871.svc.cluster.local]
Mar 11 18:38:19.941: INFO: DNS probes using dns-871/dns-test-e41e9d0f-0520-485f-8b80-3bedcf7f8fec succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:19.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-871" for this suite.

• [SLOW TEST:15.304 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":2,"skipped":18,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:19.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8316
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-49167b83-dfa3-463f-beb2-46117d034986
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-49167b83-dfa3-463f-beb2-46117d034986
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:26.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8316" for this suite.

• [SLOW TEST:6.274 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":21,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:26.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-4275
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Mar 11 18:38:26.382: INFO: Waiting up to 5m0s for pod "client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3" in namespace "containers-4275" to be "Succeeded or Failed"
Mar 11 18:38:26.384: INFO: Pod "client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.717445ms
Mar 11 18:38:28.388: INFO: Pod "client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006853622s
Mar 11 18:38:30.392: INFO: Pod "client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010271959s
STEP: Saw pod success
Mar 11 18:38:30.392: INFO: Pod "client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3" satisfied condition "Succeeded or Failed"
Mar 11 18:38:30.394: INFO: Trying to get logs from node node1 pod client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3 container test-container: 
STEP: delete the pod
Mar 11 18:38:30.406: INFO: Waiting for pod client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3 to disappear
Mar 11 18:38:30.408: INFO: Pod client-containers-a4995c84-d4ec-4e0c-9f04-bae21021baa3 no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:30.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4275" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:30.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-5734
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:34.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5734" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":5,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:34.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5413
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 18:38:35.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 18:38:37.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084715, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084715, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084715, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084715, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 18:38:40.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Mar 11 18:38:40.327: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5413" for this suite.
STEP: Destroying namespace "webhook-5413-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.775 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":6,"skipped":73,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:40.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-5638
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 11 18:38:45.534: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:38:45.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5638" for this suite.

• [SLOW TEST:5.178 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":82,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:38:45.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-7287
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 11 18:38:53.730: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:38:53.733: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:38:55.734: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:38:55.738: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:38:57.733: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:38:57.736: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:38:59.734: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:38:59.737: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:39:01.733: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:39:01.737: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:39:03.737: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:39:03.740: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:39:05.733: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:39:05.737: INFO: Pod pod-with-prestop-exec-hook still exists
Mar 11 18:39:07.734: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Mar 11 18:39:07.737: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:39:07.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7287" for this suite.

• [SLOW TEST:22.204 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":85,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:39:07.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2886
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-bdf6205d-53fd-4177-b285-aabb4ed181d6
STEP: Creating a pod to test consume secrets
Mar 11 18:39:07.896: INFO: Waiting up to 5m0s for pod "pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99" in namespace "secrets-2886" to be "Succeeded or Failed"
Mar 11 18:39:07.898: INFO: Pod "pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196088ms
Mar 11 18:39:09.901: INFO: Pod "pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004930284s
Mar 11 18:39:11.906: INFO: Pod "pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009692532s
STEP: Saw pod success
Mar 11 18:39:11.906: INFO: Pod "pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99" satisfied condition "Succeeded or Failed"
Mar 11 18:39:11.908: INFO: Trying to get logs from node node2 pod pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99 container secret-volume-test: 
STEP: delete the pod
Mar 11 18:39:11.928: INFO: Waiting for pod pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99 to disappear
Mar 11 18:39:11.930: INFO: Pod pod-secrets-290e6b8d-6e4e-4bd2-8dd3-4217f8e9de99 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:39:11.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2886" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:39:11.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2905
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-f92b90dc-e438-4604-8c04-0ddbc0f2d034
STEP: Creating secret with name s-test-opt-upd-68d03cbf-2b64-4882-988b-1023cda56f49
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f92b90dc-e438-4604-8c04-0ddbc0f2d034
STEP: Updating secret s-test-opt-upd-68d03cbf-2b64-4882-988b-1023cda56f49
STEP: Creating secret with name s-test-opt-create-5a838b5a-0bd1-495e-ba4f-2b62e9fd0461
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:40:32.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2905" for this suite.
• [SLOW TEST:81.001 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:40:32.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6300
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 18:40:33.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7" in namespace "downward-api-6300" to be "Succeeded or Failed"
Mar 11 18:40:33.080: INFO: Pod "downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278064ms
Mar 11 18:40:35.084: INFO: Pod "downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0068601s
Mar 11 18:40:37.090: INFO: Pod "downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012324328s
STEP: Saw pod success
Mar 11 18:40:37.090: INFO: Pod "downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7" satisfied condition "Succeeded or Failed"
Mar 11 18:40:37.092: INFO: Trying to get logs from node node2 pod downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7 container client-container:
STEP: delete the pod
Mar 11 18:40:37.107: INFO: Waiting for pod downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7 to disappear
Mar 11 18:40:37.109: INFO: Pod downwardapi-volume-425760e4-57d7-4fdf-a8a7-f2efe210e8d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:40:37.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6300" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":154,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:40:37.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4002
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-hn8n
STEP: Creating a pod to test atomic-volume-subpath
Mar 11 18:40:37.256: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hn8n" in namespace "subpath-4002" to be "Succeeded or Failed"
Mar 11 18:40:37.259: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.36838ms
Mar 11 18:40:39.264: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007647173s
Mar 11 18:40:41.267: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 4.010667994s
Mar 11 18:40:43.270: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 6.013734537s
Mar 11 18:40:45.273: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 8.017054208s
Mar 11 18:40:47.276: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 10.020178392s
Mar 11 18:40:49.280: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 12.023343596s
Mar 11 18:40:51.285: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 14.028398385s
Mar 11 18:40:53.292: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 16.035270606s
Mar 11 18:40:55.295: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 18.038608283s
Mar 11 18:40:57.300: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 20.044177854s
Mar 11 18:40:59.305: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Running", Reason="", readiness=true. Elapsed: 22.048310336s
Mar 11 18:41:01.310: INFO: Pod "pod-subpath-test-configmap-hn8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053802758s
STEP: Saw pod success
Mar 11 18:41:01.310: INFO: Pod "pod-subpath-test-configmap-hn8n" satisfied condition "Succeeded or Failed"
Mar 11 18:41:01.312: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-hn8n container test-container-subpath-configmap-hn8n:
STEP: delete the pod
Mar 11 18:41:01.326: INFO: Waiting for pod pod-subpath-test-configmap-hn8n to disappear
Mar 11 18:41:01.328: INFO: Pod pod-subpath-test-configmap-hn8n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hn8n
Mar 11 18:41:01.328: INFO: Deleting pod "pod-subpath-test-configmap-hn8n" in namespace "subpath-4002"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:41:01.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4002" for this suite.

• [SLOW TEST:24.221 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":12,"skipped":158,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:41:01.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-5322
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5322, will wait for the garbage collector to delete the pods
Mar 11 18:41:07.527: INFO: Deleting Job.batch foo took: 5.241121ms
Mar 11 18:41:08.127: INFO: Terminating Job.batch foo pods took: 600.28012ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:41:46.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5322" for this suite.
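After each spec finishes, the suite emits a machine-readable progress line such as {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":13,"skipped":168,"failed":0}. When post-processing logs like this one, those lines are easy to pick apart with the standard library; the `parseProgress` helper below is a hypothetical illustration for log analysis, not part of the test framework itself:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// specProgress mirrors the per-spec summary lines the suite prints after
// every completed spec.
type specProgress struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseProgress decodes one summary line into its counters.
func parseProgress(line string) (specProgress, error) {
	var p specProgress
	err := json.Unmarshal([]byte(line), &p)
	return p, err
}

func main() {
	line := `{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":13,"skipped":168,"failed":0}`
	p, err := parseProgress(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d/%d specs done, %d skipped, %d failed\n", p.Completed, p.Total, p.Skipped, p.Failed)
	// → 13/275 specs done, 168 skipped, 0 failed
}
```

Tracking the "completed" counter against "total" is a quick way to locate how far a run got before an interruption, and a nonzero "failed" flags the first spec worth scrolling back to.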
• [SLOW TEST:45.101 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":13,"skipped":168,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl replace
  should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:41:46.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2629
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 11 18:41:46.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2629'
Mar 11 18:41:46.822: INFO: stderr: ""
Mar 11 18:41:46.822: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Mar 11 18:41:56.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2629 -o json'
Mar 11 18:41:57.037: INFO: stderr: ""
Mar 11 18:41:57.037: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.37\\\"\\n ],\\n \\\"mac\\\": \\\"46:5a:85:ee:d1:f5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.37\\\"\\n ],\\n \\\"mac\\\": \\\"46:5a:85:ee:d1:f5\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2021-03-11T18:41:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2021-03-11T18:41:46Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:k8s.v1.cni.cncf.io/network-status\": {},\n \"f:k8s.v1.cni.cncf.io/networks-status\": {}\n }\n }\n },\n \"manager\": \"multus\",\n \"operation\": \"Update\",\n \"time\": \"2021-03-11T18:41:48Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.4.37\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-03-11T18:41:54Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2629\",\n \"resourceVersion\": \"14900\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2629/pods/e2e-test-httpd-pod\",\n \"uid\": \"3720f920-c45d-4012-a174-9d19f2b41ae7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4rng7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-4rng7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4rng7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-11T18:41:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-11T18:41:54Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-11T18:41:54Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-11T18:41:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://d0ee243ecfd8873ad78b44ec63c7b428641e8d442dcff5228da1670436990173\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-03-11T18:41:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.37\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.37\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-03-11T18:41:46Z\"\n }\n}\n"
STEP: replace the image in the pod
Mar 11 18:41:57.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2629'
Mar 11 18:41:57.351: INFO: stderr: ""
Mar 11 18:41:57.351: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Mar 11 18:41:57.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2629'
Mar 11 18:42:01.294: INFO: stderr: ""
Mar 11 18:42:01.294: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:01.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2629" for this suite.
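The Kubectl replace spec above fetches the pod with `kubectl get pod ... -o json` and then checks that the container image was swapped to docker.io/library/busybox:1.29. A minimal stdlib sketch of that image check follows; the `imageOf` helper and the trimmed-down `pod` struct are hypothetical (the real test uses client-go and the full v1.Pod type), decoding only the fields the check needs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pod decodes just the spec.containers fields of `kubectl get pod -o json`
// output; the full object (shown in the log above) has many more fields,
// which json.Unmarshal simply ignores.
type pod struct {
	Spec struct {
		Containers []struct {
			Name  string `json:"name"`
			Image string `json:"image"`
		} `json:"containers"`
	} `json:"spec"`
}

// imageOf returns the image of the named container, or "" if the JSON is
// malformed or the container is absent.
func imageOf(raw []byte, name string) string {
	var p pod
	if err := json.Unmarshal(raw, &p); err != nil {
		return ""
	}
	for _, c := range p.Spec.Containers {
		if c.Name == name {
			return c.Image
		}
	}
	return ""
}

func main() {
	// Abbreviated sample of the kubectl output from the log.
	raw := []byte(`{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"docker.io/library/httpd:2.4.38-alpine"}]}}`)
	fmt.Println(imageOf(raw, "e2e-test-httpd-pod"))
	// → docker.io/library/httpd:2.4.38-alpine
}
```

After the `kubectl replace -f -` step, the same check against a fresh `get` should report the busybox image instead.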
• [SLOW TEST:14.862 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":14,"skipped":169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:01.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3259
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-3259/configmap-test-df32f224-2716-4b24-a99b-7d0850c0a18d
STEP: Creating a pod to test consume configMaps
Mar 11 18:42:01.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7" in namespace "configmap-3259" to be "Succeeded or Failed"
Mar 11 18:42:01.442: INFO: Pod "pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664737ms
Mar 11 18:42:03.447: INFO: Pod "pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006953358s
Mar 11 18:42:05.451: INFO: Pod "pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011151909s
STEP: Saw pod success
Mar 11 18:42:05.451: INFO: Pod "pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7" satisfied condition "Succeeded or Failed"
Mar 11 18:42:05.453: INFO: Trying to get logs from node node2 pod pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7 container env-test:
STEP: delete the pod
Mar 11 18:42:05.466: INFO: Waiting for pod pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7 to disappear
Mar 11 18:42:05.468: INFO: Pod pod-configmaps-3d28a3e6-215a-43ac-bb3d-73ef1d3f74e7 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:05.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3259" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":208,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:42:05.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3655 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-hh5s STEP: Creating a pod to test atomic-volume-subpath Mar 11 18:42:05.616: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hh5s" in namespace "subpath-3655" to be "Succeeded or Failed" Mar 11 18:42:05.619: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 1.97966ms Mar 11 18:42:07.622: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005675552s Mar 11 18:42:09.626: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 4.009242824s Mar 11 18:42:11.629: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.011987308s Mar 11 18:42:13.631: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 8.014883197s Mar 11 18:42:15.635: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 10.018628344s Mar 11 18:42:17.639: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 12.022146195s Mar 11 18:42:19.642: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 14.025909723s Mar 11 18:42:21.646: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 16.029883278s Mar 11 18:42:23.649: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 18.032889371s Mar 11 18:42:25.654: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 20.037275535s Mar 11 18:42:27.659: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Running", Reason="", readiness=true. Elapsed: 22.042445278s Mar 11 18:42:29.663: INFO: Pod "pod-subpath-test-secret-hh5s": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.046061033s STEP: Saw pod success Mar 11 18:42:29.663: INFO: Pod "pod-subpath-test-secret-hh5s" satisfied condition "Succeeded or Failed" Mar 11 18:42:29.665: INFO: Trying to get logs from node node1 pod pod-subpath-test-secret-hh5s container test-container-subpath-secret-hh5s: STEP: delete the pod Mar 11 18:42:29.678: INFO: Waiting for pod pod-subpath-test-secret-hh5s to disappear Mar 11 18:42:29.680: INFO: Pod pod-subpath-test-secret-hh5s no longer exists STEP: Deleting pod pod-subpath-test-secret-hh5s Mar 11 18:42:29.681: INFO: Deleting pod "pod-subpath-test-secret-hh5s" in namespace "subpath-3655" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:42:29.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3655" for this suite. • [SLOW TEST:24.217 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":16,"skipped":208,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:42:29.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-324 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 18:42:30.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 18:42:32.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084950, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084950, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084950, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084950, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 
18:42:35.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:42:35.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-324" for this suite. STEP: Destroying namespace "webhook-324-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.684 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":17,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] 
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:35.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-80
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-aa398d2b-88ae-40e4-999d-32b91d440cdb
STEP: Creating a pod to test consume secrets
Mar 11 18:42:35.514: INFO: Waiting up to 5m0s for pod "pod-secrets-ec989d05-8340-4053-83bc-d3765af75226" in namespace "secrets-80" to be "Succeeded or Failed"
Mar 11 18:42:35.517: INFO: Pod "pod-secrets-ec989d05-8340-4053-83bc-d3765af75226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251117ms
Mar 11 18:42:37.519: INFO: Pod "pod-secrets-ec989d05-8340-4053-83bc-d3765af75226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004772066s
Mar 11 18:42:39.524: INFO: Pod "pod-secrets-ec989d05-8340-4053-83bc-d3765af75226": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.009470567s
STEP: Saw pod success
Mar 11 18:42:39.524: INFO: Pod "pod-secrets-ec989d05-8340-4053-83bc-d3765af75226" satisfied condition "Succeeded or Failed"
Mar 11 18:42:39.526: INFO: Trying to get logs from node node1 pod pod-secrets-ec989d05-8340-4053-83bc-d3765af75226 container secret-volume-test:
STEP: delete the pod
Mar 11 18:42:39.538: INFO: Waiting for pod pod-secrets-ec989d05-8340-4053-83bc-d3765af75226 to disappear
Mar 11 18:42:39.540: INFO: Pod pod-secrets-ec989d05-8340-4053-83bc-d3765af75226 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:39.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-80" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":231,"failed":0}
SSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:39.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9801
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:42:39.669: INFO: Creating deployment "test-recreate-deployment"
Mar 11 18:42:39.673: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Mar 11 18:42:39.678: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Mar 11 18:42:41.684: INFO: Waiting for deployment "test-recreate-deployment" to complete
Mar 11 18:42:41.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084959, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084959, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084959, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751084959, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 18:42:43.690: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Mar 11 18:42:43.695: INFO: Updating deployment test-recreate-deployment
Mar 11 18:42:43.695: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 11 18:42:43.736: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment
deployment-9801 /apis/apps/v1/namespaces/deployment-9801/deployments/test-recreate-deployment 5c4503f4-f0cf-436e-aed5-1f8ca4b7bc2e 15322 2 2021-03-11 18:42:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 
103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 
58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008d00c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-11 18:42:43 +0000 UTC,LastTransitionTime:2021-03-11 18:42:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2021-03-11 18:42:43 
+0000 UTC,LastTransitionTime:2021-03-11 18:42:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 11 18:42:43.739: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-9801 /apis/apps/v1/namespaces/deployment-9801/replicasets/test-recreate-deployment-d5667d9c7 594ee06b-4218-4b20-854a-4034a14b3b03 15321 1 2021-03-11 18:42:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5c4503f4-f0cf-436e-aed5-1f8ca4b7bc2e 0xc0008d0ef0 0xc0008d0ef1}] [] [{kube-controller-manager Update apps/v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 52 53 48 51 102 52 45 102 48 99 102 45 52 51 54 101 45 97 101 100 53 45 49 102 56 99 97 52 98 55 98 99 50 101 92 34 125 34 58 123 34 46 34 58 123 125 
44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008d0f68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 18:42:43.739: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 11 18:42:43.739: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-9801 /apis/apps/v1/namespaces/deployment-9801/replicasets/test-recreate-deployment-74d98b5f7c 
aca995e8-f3f9-420f-9158-02a872e9b5f3 15311 2 2021-03-11 18:42:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5c4503f4-f0cf-436e-aed5-1f8ca4b7bc2e 0xc0008d0df7 0xc0008d0df8}] [] [{kube-controller-manager Update apps/v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 52 53 48 51 102 52 45 102 48 99 102 45 52 51 54 101 45 97 101 100 53 45 49 102 56 99 97 52 98 55 98 99 50 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 
101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 
125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008d0e88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 11 18:42:43.743: INFO: Pod "test-recreate-deployment-d5667d9c7-q5h7l" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-q5h7l test-recreate-deployment-d5667d9c7- deployment-9801 /api/v1/namespaces/deployment-9801/pods/test-recreate-deployment-d5667d9c7-q5h7l 6646f559-f44c-4a81-9c2f-267212104319 15323 0 2021-03-11 18:42:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 594ee06b-4218-4b20-854a-4034a14b3b03 0xc0008d1a7f 0xc0008d1aa0}] [] [{kube-controller-manager Update v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 
123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 52 101 101 48 54 98 45 52 50 49 56 45 52 98 50 48 45 56 53 52 97 45 52 48 51 52 97 49 52 98 51 98 48 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 
101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 18:42:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xfffk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xfffk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xfffk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{
},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 18:42:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 18:42:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 18:42:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 18:42:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-03-11 18:42:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:43.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9801" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":19,"skipped":234,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:43.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7496
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 18:42:43.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57" in namespace "downward-api-7496" to be "Succeeded or Failed"
Mar 11 18:42:43.885: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Pending", Reason="", readiness=false. Elapsed: 1.95582ms
Mar 11 18:42:45.889: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006518999s
Mar 11 18:42:47.892: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009425668s
Mar 11 18:42:49.896: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01358431s
Mar 11 18:42:51.900: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017465412s
Mar 11 18:42:53.904: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021107351s
STEP: Saw pod success
Mar 11 18:42:53.904: INFO: Pod "downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57" satisfied condition "Succeeded or Failed"
Mar 11 18:42:53.907: INFO: Trying to get logs from node node1 pod downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57 container client-container:
STEP: delete the pod
Mar 11 18:42:53.922: INFO: Waiting for pod downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57 to disappear
Mar 11 18:42:53.924: INFO: Pod downwardapi-volume-f789ba5b-67f8-4e40-ab4a-d623555fda57 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:53.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7496" for this suite.
• [SLOW TEST:10.181 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:53.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3172
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:42:54.058: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar 11 18:42:56.086: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:42:57.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3172" for this suite.
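The ReplicationController quota scenario above can be reproduced with a manifest pair like the following. This is a minimal sketch based on the log: the quota and rc names match the test, but the replica count, labels, and image are illustrative assumptions.

```yaml
# ResourceQuota that allows at most two pods in the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
# ReplicationController asking for more replicas than the quota permits;
# it surfaces a ReplicaFailure condition until scaled down to fit the quota.
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3          # illustrative: any value above the quota triggers the condition
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative image
```

Scaling `replicas` down to 2 or fewer clears the failure condition, which is exactly what the test verifies.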
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":21,"skipped":279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:42:57.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1418
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-34b71054-a061-4767-ae07-ee1cc1594fbc
STEP: Creating a pod to test consume secrets
Mar 11 18:42:57.236: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02" in namespace "projected-1418" to be "Succeeded or Failed"
Mar 11 18:42:57.240: INFO: Pod "pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02": Phase="Pending", Reason="", readiness=false. Elapsed: 3.933706ms
Mar 11 18:42:59.244: INFO: Pod "pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007534068s
Mar 11 18:43:01.247: INFO: Pod "pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010316084s
STEP: Saw pod success
Mar 11 18:43:01.247: INFO: Pod "pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02" satisfied condition "Succeeded or Failed"
Mar 11 18:43:01.250: INFO: Trying to get logs from node node2 pod pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02 container projected-secret-volume-test:
STEP: delete the pod
Mar 11 18:43:01.264: INFO: Waiting for pod pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02 to disappear
Mar 11 18:43:01.266: INFO: Pod pod-projected-secrets-66990638-891b-4563-b0a9-0c2935770f02 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:43:01.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1418" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":304,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:43:01.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7036
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 11 18:43:05.937: INFO: Successfully updated pod "labelsupdate24407535-2f4b-4fa8-97e0-995aa9b167aa"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:43:07.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7036" for this suite.
• [SLOW TEST:6.683 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":311,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:43:07.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5009
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-27690aa1-e3ef-444d-8d08-06df3588d2ac in namespace container-probe-5009
Mar 11 18:43:14.102: INFO: Started pod busybox-27690aa1-e3ef-444d-8d08-06df3588d2ac in namespace container-probe-5009
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 18:43:14.105: INFO: Initial restart count of pod busybox-27690aa1-e3ef-444d-8d08-06df3588d2ac is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:47:14.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5009" for this suite.
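The liveness-probe test above watches a busybox pod whose exec probe keeps succeeding, so restartCount stays at 0 for the whole observation window. A minimal sketch of that probe shape, assuming illustrative timings and container command (only the `cat /tmp/health` probe command comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the probe file up front and keep the container alive,
    # so every probe invocation of "cat /tmp/health" succeeds.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5   # illustrative values
      periodSeconds: 5
```

If the file were removed, the probe would start failing and the kubelet would restart the container, which is the inverse case this conformance test rules out.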
• [SLOW TEST:246.691 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":324,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:47:14.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4319
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 11 18:47:14.774: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 18:47:14.787: INFO: Waiting for terminating namespaces to be deleted...
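The NodeSelector predicate test exercised here schedules a pod whose node selector matches no node, and then asserts on the resulting FailedScheduling events. A sketch of such a pod, assuming an illustrative selector key/value and image (the pod name `restricted-pod` appears in the events this test records):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # No node in the 5-node cluster carries this label, so the scheduler
  # reports: "0/5 nodes are available: 5 node(s) didn't match node selector."
  nodeSelector:
    label: nonempty       # illustrative key/value
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # illustrative image
```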
Mar 11 18:47:14.790: INFO: Logging pods the kubelet thinks is on node node1 before test
Mar 11 18:47:14.810: INFO: prometheus-k8s-0 from monitoring started at 2021-03-11 18:04:37 +0000 UTC (5 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container grafana ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container prometheus ready: true, restart count 1
Mar 11 18:47:14.810: INFO: Container prometheus-config-reloader ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container rules-configmap-reloader ready: true, restart count 0
Mar 11 18:47:14.810: INFO: kube-multus-ds-amd64-gtmmz from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container kube-multus ready: true, restart count 1
Mar 11 18:47:14.810: INFO: node-feature-discovery-worker-nf56t from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container nfd-worker ready: true, restart count 0
Mar 11 18:47:14.810: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container kube-sriovdp ready: true, restart count 0
Mar 11 18:47:14.810: INFO: cmk-webhook-888945845-2gpfq from kube-system started at 2021-03-11 18:03:34 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container cmk-webhook ready: true, restart count 0
Mar 11 18:47:14.810: INFO: node-exporter-mw629 from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container kube-rbac-proxy ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container node-exporter ready: true, restart count 0
Mar 11 18:47:14.810: INFO: kube-proxy-5zz5g from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container kube-proxy ready: true, restart count 2
Mar 11 18:47:14.810: INFO: kube-flannel-8pz9c from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container kube-flannel ready: true, restart count 2
Mar 11 18:47:14.810: INFO: cmk-init-discover-node2-29mrv from kube-system started at 2021-03-11 18:03:13 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container discover ready: false, restart count 0
Mar 11 18:47:14.810: INFO: Container init ready: false, restart count 0
Mar 11 18:47:14.810: INFO: Container install ready: false, restart count 0
Mar 11 18:47:14.810: INFO: collectd-4rvsd from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container collectd ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container collectd-exporter ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container rbac-proxy ready: true, restart count 0
Mar 11 18:47:14.810: INFO: nginx-proxy-node1 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container nginx-proxy ready: true, restart count 2
Mar 11 18:47:14.810: INFO: cmk-s6v97 from kube-system started at 2021-03-11 18:03:34 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.810: INFO: Container nodereport ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Container reconcile ready: true, restart count 0
Mar 11 18:47:14.810: INFO: Logging pods the kubelet thinks is on node node2 before test
Mar 11 18:47:14.831: INFO: kube-flannel-8wwvj from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container kube-flannel ready: true, restart count 2
Mar 11 18:47:14.831: INFO: cmk-slzjv from kube-system started at 2021-03-11 18:03:33 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container nodereport ready: true, restart count 0
Mar 11 18:47:14.831: INFO: Container reconcile ready: true, restart count 0
Mar 11 18:47:14.831: INFO: node-exporter-x6vqx from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container kube-rbac-proxy ready: true, restart count 0
Mar 11 18:47:14.831: INFO: Container node-exporter ready: true, restart count 0
Mar 11 18:47:14.831: INFO: collectd-86ww6 from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container collectd ready: true, restart count 0
Mar 11 18:47:14.831: INFO: Container collectd-exporter ready: true, restart count 0
Mar 11 18:47:14.831: INFO: Container rbac-proxy ready: true, restart count 0
Mar 11 18:47:14.831: INFO: kubernetes-dashboard-57777fbdcb-zsnff from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container kubernetes-dashboard ready: true, restart count 1
Mar 11 18:47:14.831: INFO: node-feature-discovery-worker-8xdg7 from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container nfd-worker ready: true, restart count 0
Mar 11 18:47:14.831: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.831: INFO: Container kube-sriovdp ready: true, restart count 0
Mar 11 18:47:14.831: INFO: cmk-init-discover-node2-9knwq from kube-system started at 2021-03-11 18:02:23 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container discover ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container init ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container install ready: false, restart count 0
Mar 11 18:47:14.832: INFO: kube-multus-ds-amd64-rpm89 from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container kube-multus ready: true, restart count 1
Mar 11 18:47:14.832: INFO: cmk-init-discover-node1-vk7wm from kube-system started at 2021-03-11 18:01:40 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container discover ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container init ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container install ready: false, restart count 0
Mar 11 18:47:14.832: INFO: prometheus-operator-f66f5fb4d-f2pkm from monitoring started at 2021-03-11 18:04:21 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container kube-rbac-proxy ready: true, restart count 0
Mar 11 18:47:14.832: INFO: Container prometheus-operator ready: true, restart count 0
Mar 11 18:47:14.832: INFO: nginx-proxy-node2 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container nginx-proxy ready: true, restart count 2
Mar 11 18:47:14.832: INFO: kube-proxy-znx8n from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container kube-proxy ready: true, restart count 1
Mar 11 18:47:14.832: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 18:47:14.832: INFO: cmk-init-discover-node2-c5j6h from kube-system started at 2021-03-11 18:02:02 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container discover ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container init ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container install ready: false, restart count 0
Mar 11 18:47:14.832: INFO: cmk-init-discover-node2-qbc6m from kube-system started at 2021-03-11 18:02:53 +0000 UTC (3 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container discover ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container init ready: false, restart count 0
Mar 11 18:47:14.832: INFO: Container install ready: false, restart count 0
Mar 11 18:47:14.832: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz from monitoring started at 2021-03-11 18:07:22 +0000 UTC (2 container statuses recorded)
Mar 11 18:47:14.832: INFO: Container tas-controller ready: true, restart count 0
Mar 11 18:47:14.832: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.166b5e2c1a5ac059], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.166b5e2c1aa2f456], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:47:15.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4319" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":25,"skipped":335,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:47:15.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3408
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:47:32.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3408" for this suite.
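The terminating/not-terminating split exercised above comes from quota scopes: a pod counts as "terminating" when it has `activeDeadlineSeconds` set, and "not terminating" otherwise. A minimal sketch of the two quotas the test pairs against a long-running pod and a terminating pod (names and limits are illustrative):

```yaml
# Counts only pods with activeDeadlineSeconds set (terminating pods).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopes: ["Terminating"]
---
# Counts only long-running pods (no activeDeadlineSeconds).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]
```

Each pod's usage shows up in exactly one quota's status, and is released from it when the pod is deleted, which is what the STEP sequence above verifies.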
• [SLOW TEST:16.210 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":26,"skipped":370,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:47:32.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-1441
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Mar 11 18:47:32.226: INFO: Created pod &Pod{ObjectMeta:{dns-1441 dns-1441 /api/v1/namespaces/dns-1441/pods/dns-1441 97ed8512-2bed-4417-a80f-c888062060bd 16642 0 2021-03-11 18:47:32 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2021-03-11 18:47:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkqnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkqnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkqnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 18:47:32.230: INFO: The status of Pod dns-1441 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 18:47:34.233: INFO: The status of Pod dns-1441 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 18:47:36.233: INFO: The status of Pod dns-1441 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Mar 11 18:47:36.233: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1441 PodName:dns-1441 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 18:47:36.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
Mar 11 18:47:36.349: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1441 PodName:dns-1441 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 18:47:36.349: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 18:47:36.476: INFO: Deleting pod dns-1441...
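The full Pod dump above reduces to a short manifest; the DNS-relevant fields (dnsPolicy, nameserver 1.1.1.1, search path resolv.conf.local, agnhost image and `pause` arg) are taken directly from the logged spec, while everything else is defaulted. A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-1441
  namespace: dns-1441
spec:
  dnsPolicy: None            # ignore the cluster DNS settings entirely
  dnsConfig:                 # this becomes the pod's /etc/resolv.conf
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
```

The test then execs `/agnhost dns-suffix` and `/agnhost dns-server-list` inside the container to confirm both values landed in the pod's resolver configuration.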
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:47:36.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1441" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":27,"skipped":377,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:47:36.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7810 STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-7323 STEP: Creating secret with name secret-test-b1ae3fb6-b187-4587-a339-cdbb92e62519 STEP: Creating a pod to test consume secrets Mar 11 18:47:36.748: INFO: Waiting up to 5m0s for pod "pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50" in namespace "secrets-7810" to be "Succeeded or Failed" Mar 11 18:47:36.750: INFO: Pod "pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.188266ms Mar 11 18:47:38.755: INFO: Pod "pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007431486s Mar 11 18:47:40.760: INFO: Pod "pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012779283s STEP: Saw pod success Mar 11 18:47:40.761: INFO: Pod "pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50" satisfied condition "Succeeded or Failed" Mar 11 18:47:40.763: INFO: Trying to get logs from node node1 pod pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50 container secret-volume-test: STEP: delete the pod Mar 11 18:47:40.778: INFO: Waiting for pod pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50 to disappear Mar 11 18:47:40.781: INFO: Pod pod-secrets-3fb85272-84a7-4f1d-8bbe-191442deba50 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:47:40.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7810" for this suite. STEP: Destroying namespace "secret-namespace-7323" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":380,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:47:40.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3608 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-4c598346-2d60-418e-a113-40ba86f4be60 STEP: Creating a pod to test consume secrets Mar 11 18:47:40.932: INFO: Waiting up to 5m0s for pod "pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16" in namespace "secrets-3608" to be "Succeeded or Failed" Mar 11 18:47:40.934: INFO: Pod "pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16": Phase="Pending", Reason="", readiness=false. Elapsed: 1.965991ms Mar 11 18:47:42.937: INFO: Pod "pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004972145s Mar 11 18:47:44.944: INFO: Pod "pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011112708s STEP: Saw pod success Mar 11 18:47:44.944: INFO: Pod "pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16" satisfied condition "Succeeded or Failed" Mar 11 18:47:44.946: INFO: Trying to get logs from node node1 pod pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16 container secret-volume-test: STEP: delete the pod Mar 11 18:47:44.957: INFO: Waiting for pod pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16 to disappear Mar 11 18:47:44.960: INFO: Pod pod-secrets-f6d26e4f-f040-4226-816c-715bdad5ad16 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:47:44.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3608" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":381,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:47:44.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7938 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 11 18:47:45.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5" in namespace "downward-api-7938" to be "Succeeded or Failed" Mar 11 18:47:45.102: INFO: Pod "downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.875258ms Mar 11 18:47:47.106: INFO: Pod "downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005921064s Mar 11 18:47:49.109: INFO: Pod "downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009501135s STEP: Saw pod success Mar 11 18:47:49.109: INFO: Pod "downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5" satisfied condition "Succeeded or Failed" Mar 11 18:47:49.112: INFO: Trying to get logs from node node1 pod downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5 container client-container: STEP: delete the pod Mar 11 18:47:49.126: INFO: Waiting for pod downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5 to disappear Mar 11 18:47:49.128: INFO: Pod downwardapi-volume-7dc1a346-9be8-41ec-b1fe-2bba60e5abc5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:47:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7938" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":389,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:47:49.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-1692 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1692 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1692 STEP: Creating statefulset with conflicting port in namespace statefulset-1692 STEP: Waiting until pod test-pod will start running in namespace statefulset-1692 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1692 Mar 11 18:47:55.289: INFO: Observed stateful pod in namespace: statefulset-1692, name: ss-0, 
uid: 48d6ff98-14b5-4fdd-bf40-2321956ca7b3, status phase: Pending. Waiting for statefulset controller to delete. Mar 11 18:47:55.343: INFO: Observed stateful pod in namespace: statefulset-1692, name: ss-0, uid: 48d6ff98-14b5-4fdd-bf40-2321956ca7b3, status phase: Failed. Waiting for statefulset controller to delete. Mar 11 18:47:55.347: INFO: Observed stateful pod in namespace: statefulset-1692, name: ss-0, uid: 48d6ff98-14b5-4fdd-bf40-2321956ca7b3, status phase: Failed. Waiting for statefulset controller to delete. Mar 11 18:47:55.349: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1692 STEP: Removing pod with conflicting port in namespace statefulset-1692 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1692 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 11 18:48:03.374: INFO: Deleting all statefulset in ns statefulset-1692 Mar 11 18:48:03.377: INFO: Scaling statefulset ss to 0 Mar 11 18:48:13.396: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 18:48:13.399: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:48:13.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1692" for this suite. 
• [SLOW TEST:24.281 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":31,"skipped":393,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:48:13.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-1883 STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 18:48:19.588: INFO: DNS probes using dns-1883/dns-test-b7be4821-827a-42dc-b55a-77dbbfb5f9fa succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:48:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1883" for this suite. 
• [SLOW TEST:6.186 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":32,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:48:19.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5197 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 18:48:20.305: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 18:48:22.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085300, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085300, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085300, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085300, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 18:48:25.325: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:48:25.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5197" for this suite. STEP: Destroying namespace "webhook-5197-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.859 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":33,"skipped":414,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:48:25.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-3444 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
creating the pod Mar 11 18:48:25.585: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:48:32.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3444" for this suite. • [SLOW TEST:6.694 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":34,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:48:32.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3826 STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets 
changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 11 18:48:32.282: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:48:52.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3826" for this suite. • [SLOW TEST:20.646 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":35,"skipped":441,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:48:52.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6318 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:49:03.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6318" for this suite. • [SLOW TEST:11.167 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":36,"skipped":460,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:49:03.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8889 STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes Mar 11 18:49:04.100: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 11 18:49:09.103: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:49:10.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8889" for this suite. 
• [SLOW TEST:6.150 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":37,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:49:10.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2063 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2063 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2063 I0311 18:49:10.261406 12 runners.go:190] Created replication controller with name: externalname-service, namespace: 
services-2063, replica count: 2 I0311 18:49:13.319244 12 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 18:49:13.319: INFO: Creating new exec pod Mar 11 18:49:18.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2063 execpodwg5tz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 11 18:49:18.599: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Mar 11 18:49:18.599: INFO: stdout: "" Mar 11 18:49:18.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2063 execpodwg5tz -- /bin/sh -x -c nc -zv -t -w 2 10.233.45.77 80' Mar 11 18:49:18.830: INFO: stderr: "+ nc -zv -t -w 2 10.233.45.77 80\nConnection to 10.233.45.77 80 port [tcp/http] succeeded!\n" Mar 11 18:49:18.830: INFO: stdout: "" Mar 11 18:49:18.830: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:49:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2063" for this suite. 
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:8.729 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":38,"skipped":500,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:49:18.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-5733
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:49:18.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5733
I0311 18:49:18.994367 12 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5733, replica count: 1
I0311 18:49:20.045200 12 runners.go:190] svc-latency-rc Pods: 1
out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 18:49:21.046673 12 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 18:49:22.047032 12 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 18:49:23.047324 12 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 18:49:23.154: INFO: Created: latency-svc-b46pm Mar 11 18:49:23.160: INFO: Got endpoints: latency-svc-b46pm [13.024735ms] Mar 11 18:49:23.168: INFO: Created: latency-svc-g4hfd Mar 11 18:49:23.168: INFO: Got endpoints: latency-svc-g4hfd [7.762411ms] Mar 11 18:49:23.170: INFO: Created: latency-svc-bk4p5 Mar 11 18:49:23.172: INFO: Got endpoints: latency-svc-bk4p5 [10.844015ms] Mar 11 18:49:23.175: INFO: Created: latency-svc-hm8cp Mar 11 18:49:23.175: INFO: Got endpoints: latency-svc-hm8cp [13.249329ms] Mar 11 18:49:23.175: INFO: Created: latency-svc-94m7v Mar 11 18:49:23.177: INFO: Got endpoints: latency-svc-94m7v [15.773499ms] Mar 11 18:49:23.178: INFO: Created: latency-svc-9pcbm Mar 11 18:49:23.180: INFO: Got endpoints: latency-svc-9pcbm [19.318785ms] Mar 11 18:49:23.181: INFO: Created: latency-svc-lgrtz Mar 11 18:49:23.183: INFO: Got endpoints: latency-svc-lgrtz [21.603323ms] Mar 11 18:49:23.184: INFO: Created: latency-svc-2rlvd Mar 11 18:49:23.186: INFO: Got endpoints: latency-svc-2rlvd [24.859706ms] Mar 11 18:49:23.187: INFO: Created: latency-svc-f56dc Mar 11 18:49:23.189: INFO: Got endpoints: latency-svc-f56dc [27.4645ms] Mar 11 18:49:23.189: INFO: Created: latency-svc-hbgs4 Mar 11 18:49:23.191: INFO: Got endpoints: latency-svc-hbgs4 [30.39723ms] Mar 11 18:49:23.192: INFO: Created: latency-svc-xln5r Mar 11 18:49:23.194: INFO: Got 
endpoints: latency-svc-xln5r [32.587266ms] Mar 11 18:49:23.195: INFO: Created: latency-svc-b99td Mar 11 18:49:23.197: INFO: Got endpoints: latency-svc-b99td [35.961447ms] Mar 11 18:49:23.198: INFO: Created: latency-svc-nb94k Mar 11 18:49:23.200: INFO: Got endpoints: latency-svc-nb94k [38.612168ms] Mar 11 18:49:23.201: INFO: Created: latency-svc-gfxzn Mar 11 18:49:23.203: INFO: Got endpoints: latency-svc-gfxzn [41.132003ms] Mar 11 18:49:23.203: INFO: Created: latency-svc-4dldd Mar 11 18:49:23.205: INFO: Got endpoints: latency-svc-4dldd [44.304561ms] Mar 11 18:49:23.205: INFO: Created: latency-svc-4qcv7 Mar 11 18:49:23.207: INFO: Got endpoints: latency-svc-4qcv7 [45.679386ms] Mar 11 18:49:23.211: INFO: Created: latency-svc-g6zdt Mar 11 18:49:23.212: INFO: Created: latency-svc-tlmq8 Mar 11 18:49:23.212: INFO: Got endpoints: latency-svc-g6zdt [43.349491ms] Mar 11 18:49:23.214: INFO: Got endpoints: latency-svc-tlmq8 [42.136225ms] Mar 11 18:49:23.215: INFO: Created: latency-svc-5tmgp Mar 11 18:49:23.217: INFO: Got endpoints: latency-svc-5tmgp [42.514223ms] Mar 11 18:49:23.218: INFO: Created: latency-svc-k54qj Mar 11 18:49:23.222: INFO: Created: latency-svc-bm6n2 Mar 11 18:49:23.222: INFO: Got endpoints: latency-svc-k54qj [44.516734ms] Mar 11 18:49:23.223: INFO: Got endpoints: latency-svc-bm6n2 [42.822296ms] Mar 11 18:49:23.225: INFO: Created: latency-svc-sflpn Mar 11 18:49:23.227: INFO: Got endpoints: latency-svc-sflpn [44.008646ms] Mar 11 18:49:23.227: INFO: Created: latency-svc-2n8zs Mar 11 18:49:23.229: INFO: Got endpoints: latency-svc-2n8zs [42.733149ms] Mar 11 18:49:23.230: INFO: Created: latency-svc-ngbvc Mar 11 18:49:23.232: INFO: Got endpoints: latency-svc-ngbvc [43.471753ms] Mar 11 18:49:23.233: INFO: Created: latency-svc-8mdl4 Mar 11 18:49:23.235: INFO: Got endpoints: latency-svc-8mdl4 [43.548625ms] Mar 11 18:49:23.236: INFO: Created: latency-svc-8ffmx Mar 11 18:49:23.238: INFO: Got endpoints: latency-svc-8ffmx [43.854947ms] Mar 11 18:49:23.239: INFO: Created: 
latency-svc-vn989 Mar 11 18:49:23.241: INFO: Got endpoints: latency-svc-vn989 [43.498662ms] Mar 11 18:49:23.241: INFO: Created: latency-svc-hfltn Mar 11 18:49:23.243: INFO: Got endpoints: latency-svc-hfltn [42.625314ms] Mar 11 18:49:23.244: INFO: Created: latency-svc-khtrs Mar 11 18:49:23.245: INFO: Got endpoints: latency-svc-khtrs [42.885686ms] Mar 11 18:49:23.247: INFO: Created: latency-svc-px4dg Mar 11 18:49:23.249: INFO: Got endpoints: latency-svc-px4dg [43.813987ms] Mar 11 18:49:23.249: INFO: Created: latency-svc-c2xf8 Mar 11 18:49:23.251: INFO: Got endpoints: latency-svc-c2xf8 [44.070404ms] Mar 11 18:49:23.252: INFO: Created: latency-svc-mc5kc Mar 11 18:49:23.254: INFO: Created: latency-svc-2rqhj Mar 11 18:49:23.257: INFO: Created: latency-svc-ggqsk Mar 11 18:49:23.257: INFO: Got endpoints: latency-svc-mc5kc [44.849898ms] Mar 11 18:49:23.259: INFO: Created: latency-svc-zvhb5 Mar 11 18:49:23.261: INFO: Created: latency-svc-qvzwh Mar 11 18:49:23.264: INFO: Created: latency-svc-mdr97 Mar 11 18:49:23.266: INFO: Created: latency-svc-rbrj8 Mar 11 18:49:23.268: INFO: Created: latency-svc-sxm5m Mar 11 18:49:23.271: INFO: Created: latency-svc-npsng Mar 11 18:49:23.273: INFO: Created: latency-svc-mvn27 Mar 11 18:49:23.276: INFO: Created: latency-svc-cxz4n Mar 11 18:49:23.279: INFO: Created: latency-svc-htn48 Mar 11 18:49:23.281: INFO: Created: latency-svc-6kvw8 Mar 11 18:49:23.284: INFO: Created: latency-svc-hqxn2 Mar 11 18:49:23.287: INFO: Created: latency-svc-w45d2 Mar 11 18:49:23.289: INFO: Created: latency-svc-2456p Mar 11 18:49:23.307: INFO: Got endpoints: latency-svc-2rqhj [92.786564ms] Mar 11 18:49:23.312: INFO: Created: latency-svc-56mb5 Mar 11 18:49:23.357: INFO: Got endpoints: latency-svc-ggqsk [139.51433ms] Mar 11 18:49:23.362: INFO: Created: latency-svc-2zknx Mar 11 18:49:23.407: INFO: Got endpoints: latency-svc-zvhb5 [185.131085ms] Mar 11 18:49:23.411: INFO: Created: latency-svc-bwh2r Mar 11 18:49:23.458: INFO: Got endpoints: latency-svc-qvzwh 
[234.730779ms] Mar 11 18:49:23.463: INFO: Created: latency-svc-zrxfh Mar 11 18:49:23.507: INFO: Got endpoints: latency-svc-mdr97 [279.765811ms] Mar 11 18:49:23.512: INFO: Created: latency-svc-mgjl7 Mar 11 18:49:23.557: INFO: Got endpoints: latency-svc-rbrj8 [328.0598ms] Mar 11 18:49:23.562: INFO: Created: latency-svc-8nb8n Mar 11 18:49:23.607: INFO: Got endpoints: latency-svc-sxm5m [375.296808ms] Mar 11 18:49:23.613: INFO: Created: latency-svc-jhnts Mar 11 18:49:23.657: INFO: Got endpoints: latency-svc-npsng [422.461289ms] Mar 11 18:49:23.663: INFO: Created: latency-svc-54ssk Mar 11 18:49:23.708: INFO: Got endpoints: latency-svc-mvn27 [470.013422ms] Mar 11 18:49:23.713: INFO: Created: latency-svc-6dt8x Mar 11 18:49:23.757: INFO: Got endpoints: latency-svc-cxz4n [516.132163ms] Mar 11 18:49:23.762: INFO: Created: latency-svc-rlh76 Mar 11 18:49:23.807: INFO: Got endpoints: latency-svc-htn48 [563.773878ms] Mar 11 18:49:23.813: INFO: Created: latency-svc-d6qgt Mar 11 18:49:23.857: INFO: Got endpoints: latency-svc-6kvw8 [611.383607ms] Mar 11 18:49:23.863: INFO: Created: latency-svc-t6bk9 Mar 11 18:49:23.907: INFO: Got endpoints: latency-svc-hqxn2 [658.551793ms] Mar 11 18:49:23.912: INFO: Created: latency-svc-7pd8m Mar 11 18:49:23.957: INFO: Got endpoints: latency-svc-w45d2 [705.307755ms] Mar 11 18:49:23.961: INFO: Created: latency-svc-87kjv Mar 11 18:49:24.009: INFO: Got endpoints: latency-svc-2456p [752.533948ms] Mar 11 18:49:24.021: INFO: Created: latency-svc-jnmkc Mar 11 18:49:24.056: INFO: Got endpoints: latency-svc-56mb5 [749.581729ms] Mar 11 18:49:24.061: INFO: Created: latency-svc-wqwmk Mar 11 18:49:24.107: INFO: Got endpoints: latency-svc-2zknx [750.391268ms] Mar 11 18:49:24.112: INFO: Created: latency-svc-cwvdf Mar 11 18:49:24.157: INFO: Got endpoints: latency-svc-bwh2r [750.513272ms] Mar 11 18:49:24.162: INFO: Created: latency-svc-p8754 Mar 11 18:49:24.207: INFO: Got endpoints: latency-svc-zrxfh [749.39521ms] Mar 11 18:49:24.213: INFO: Created: 
latency-svc-rbhpf Mar 11 18:49:24.257: INFO: Got endpoints: latency-svc-mgjl7 [750.390356ms] Mar 11 18:49:24.263: INFO: Created: latency-svc-prmcm Mar 11 18:49:24.307: INFO: Got endpoints: latency-svc-8nb8n [750.157918ms] Mar 11 18:49:24.313: INFO: Created: latency-svc-4trzf Mar 11 18:49:24.357: INFO: Got endpoints: latency-svc-jhnts [749.730037ms] Mar 11 18:49:24.362: INFO: Created: latency-svc-4c4sv Mar 11 18:49:24.407: INFO: Got endpoints: latency-svc-54ssk [749.654854ms] Mar 11 18:49:24.413: INFO: Created: latency-svc-7n6gn Mar 11 18:49:24.457: INFO: Got endpoints: latency-svc-6dt8x [749.134423ms] Mar 11 18:49:24.462: INFO: Created: latency-svc-nq6pn Mar 11 18:49:24.507: INFO: Got endpoints: latency-svc-rlh76 [750.027318ms] Mar 11 18:49:24.513: INFO: Created: latency-svc-7ln27 Mar 11 18:49:24.557: INFO: Got endpoints: latency-svc-d6qgt [749.567354ms] Mar 11 18:49:24.562: INFO: Created: latency-svc-vhh8n Mar 11 18:49:24.607: INFO: Got endpoints: latency-svc-t6bk9 [750.105613ms] Mar 11 18:49:24.612: INFO: Created: latency-svc-6sfpr Mar 11 18:49:24.658: INFO: Got endpoints: latency-svc-7pd8m [750.299559ms] Mar 11 18:49:24.663: INFO: Created: latency-svc-4czcf Mar 11 18:49:24.708: INFO: Got endpoints: latency-svc-87kjv [750.896004ms] Mar 11 18:49:24.713: INFO: Created: latency-svc-rjzng Mar 11 18:49:24.757: INFO: Got endpoints: latency-svc-jnmkc [747.950332ms] Mar 11 18:49:24.762: INFO: Created: latency-svc-shxbz Mar 11 18:49:24.807: INFO: Got endpoints: latency-svc-wqwmk [751.084696ms] Mar 11 18:49:24.813: INFO: Created: latency-svc-m6cwl Mar 11 18:49:24.858: INFO: Got endpoints: latency-svc-cwvdf [750.712013ms] Mar 11 18:49:24.864: INFO: Created: latency-svc-xgtpg Mar 11 18:49:24.908: INFO: Got endpoints: latency-svc-p8754 [750.293092ms] Mar 11 18:49:24.913: INFO: Created: latency-svc-gjrrb Mar 11 18:49:24.957: INFO: Got endpoints: latency-svc-rbhpf [750.022777ms] Mar 11 18:49:24.963: INFO: Created: latency-svc-hzgs2 Mar 11 18:49:25.007: INFO: Got endpoints: 
latency-svc-prmcm [749.572809ms] Mar 11 18:49:25.013: INFO: Created: latency-svc-6n55g Mar 11 18:49:25.058: INFO: Got endpoints: latency-svc-4trzf [750.407859ms] Mar 11 18:49:25.062: INFO: Created: latency-svc-qcpj5 Mar 11 18:49:25.107: INFO: Got endpoints: latency-svc-4c4sv [750.037045ms] Mar 11 18:49:25.113: INFO: Created: latency-svc-tjtkc Mar 11 18:49:25.158: INFO: Got endpoints: latency-svc-7n6gn [750.493176ms] Mar 11 18:49:25.164: INFO: Created: latency-svc-htdcz Mar 11 18:49:25.208: INFO: Got endpoints: latency-svc-nq6pn [750.826256ms] Mar 11 18:49:25.213: INFO: Created: latency-svc-66l7j Mar 11 18:49:25.257: INFO: Got endpoints: latency-svc-7ln27 [749.775932ms] Mar 11 18:49:25.263: INFO: Created: latency-svc-4cljm Mar 11 18:49:25.307: INFO: Got endpoints: latency-svc-vhh8n [750.201238ms] Mar 11 18:49:25.312: INFO: Created: latency-svc-cztz4 Mar 11 18:49:25.357: INFO: Got endpoints: latency-svc-6sfpr [749.974318ms] Mar 11 18:49:25.364: INFO: Created: latency-svc-9mpzp Mar 11 18:49:25.407: INFO: Got endpoints: latency-svc-4czcf [748.872962ms] Mar 11 18:49:25.412: INFO: Created: latency-svc-f6lqj Mar 11 18:49:25.458: INFO: Got endpoints: latency-svc-rjzng [750.234263ms] Mar 11 18:49:25.463: INFO: Created: latency-svc-dxwd6 Mar 11 18:49:25.507: INFO: Got endpoints: latency-svc-shxbz [749.721641ms] Mar 11 18:49:25.512: INFO: Created: latency-svc-qz5rb Mar 11 18:49:25.557: INFO: Got endpoints: latency-svc-m6cwl [749.614487ms] Mar 11 18:49:25.563: INFO: Created: latency-svc-p2lvc Mar 11 18:49:25.607: INFO: Got endpoints: latency-svc-xgtpg [749.092706ms] Mar 11 18:49:25.613: INFO: Created: latency-svc-dgx6j Mar 11 18:49:25.658: INFO: Got endpoints: latency-svc-gjrrb [750.732591ms] Mar 11 18:49:25.665: INFO: Created: latency-svc-5xght Mar 11 18:49:25.707: INFO: Got endpoints: latency-svc-hzgs2 [749.750294ms] Mar 11 18:49:25.712: INFO: Created: latency-svc-xpczf Mar 11 18:49:25.757: INFO: Got endpoints: latency-svc-6n55g [750.439236ms] Mar 11 18:49:25.763: INFO: 
Created: latency-svc-lpfg4 Mar 11 18:49:25.807: INFO: Got endpoints: latency-svc-qcpj5 [749.857342ms] Mar 11 18:49:25.812: INFO: Created: latency-svc-gfm4x Mar 11 18:49:25.858: INFO: Got endpoints: latency-svc-tjtkc [750.305223ms] Mar 11 18:49:25.864: INFO: Created: latency-svc-7gqwt Mar 11 18:49:25.907: INFO: Got endpoints: latency-svc-htdcz [749.733581ms] Mar 11 18:49:25.914: INFO: Created: latency-svc-bqkfn Mar 11 18:49:25.957: INFO: Got endpoints: latency-svc-66l7j [749.259326ms] Mar 11 18:49:25.962: INFO: Created: latency-svc-jqmrh Mar 11 18:49:26.007: INFO: Got endpoints: latency-svc-4cljm [750.472014ms] Mar 11 18:49:26.013: INFO: Created: latency-svc-hqd2p Mar 11 18:49:26.057: INFO: Got endpoints: latency-svc-cztz4 [749.912813ms] Mar 11 18:49:26.062: INFO: Created: latency-svc-gxwwt Mar 11 18:49:26.108: INFO: Got endpoints: latency-svc-9mpzp [750.3994ms] Mar 11 18:49:26.112: INFO: Created: latency-svc-7s5mx Mar 11 18:49:26.157: INFO: Got endpoints: latency-svc-f6lqj [750.062197ms] Mar 11 18:49:26.163: INFO: Created: latency-svc-4vdf6 Mar 11 18:49:26.207: INFO: Got endpoints: latency-svc-dxwd6 [749.623394ms] Mar 11 18:49:26.215: INFO: Created: latency-svc-b5sc6 Mar 11 18:49:26.257: INFO: Got endpoints: latency-svc-qz5rb [750.203882ms] Mar 11 18:49:26.262: INFO: Created: latency-svc-6zpqx Mar 11 18:49:26.308: INFO: Got endpoints: latency-svc-p2lvc [750.490046ms] Mar 11 18:49:26.313: INFO: Created: latency-svc-ffj27 Mar 11 18:49:26.357: INFO: Got endpoints: latency-svc-dgx6j [750.353592ms] Mar 11 18:49:26.363: INFO: Created: latency-svc-5nlvf Mar 11 18:49:26.407: INFO: Got endpoints: latency-svc-5xght [748.770935ms] Mar 11 18:49:26.412: INFO: Created: latency-svc-k8wp9 Mar 11 18:49:26.457: INFO: Got endpoints: latency-svc-xpczf [750.040175ms] Mar 11 18:49:26.463: INFO: Created: latency-svc-c7kjx Mar 11 18:49:26.507: INFO: Got endpoints: latency-svc-lpfg4 [749.919822ms] Mar 11 18:49:26.514: INFO: Created: latency-svc-5g9c2 Mar 11 18:49:26.557: INFO: Got 
endpoints: latency-svc-gfm4x [749.883656ms] Mar 11 18:49:26.562: INFO: Created: latency-svc-cfxl6 Mar 11 18:49:26.607: INFO: Got endpoints: latency-svc-7gqwt [749.42159ms] Mar 11 18:49:26.612: INFO: Created: latency-svc-2gdnn Mar 11 18:49:26.657: INFO: Got endpoints: latency-svc-bqkfn [749.971026ms] Mar 11 18:49:26.663: INFO: Created: latency-svc-9vsff Mar 11 18:49:26.707: INFO: Got endpoints: latency-svc-jqmrh [750.16696ms] Mar 11 18:49:26.712: INFO: Created: latency-svc-6r9s4 Mar 11 18:49:26.758: INFO: Got endpoints: latency-svc-hqd2p [750.060932ms] Mar 11 18:49:26.763: INFO: Created: latency-svc-jwxw7 Mar 11 18:49:26.807: INFO: Got endpoints: latency-svc-gxwwt [749.93638ms] Mar 11 18:49:26.813: INFO: Created: latency-svc-ztxnc Mar 11 18:49:26.857: INFO: Got endpoints: latency-svc-7s5mx [749.633844ms] Mar 11 18:49:26.862: INFO: Created: latency-svc-vptvm Mar 11 18:49:26.907: INFO: Got endpoints: latency-svc-4vdf6 [750.023936ms] Mar 11 18:49:26.912: INFO: Created: latency-svc-sdh62 Mar 11 18:49:26.957: INFO: Got endpoints: latency-svc-b5sc6 [749.611383ms] Mar 11 18:49:26.964: INFO: Created: latency-svc-p6dmb Mar 11 18:49:27.007: INFO: Got endpoints: latency-svc-6zpqx [750.166917ms] Mar 11 18:49:27.013: INFO: Created: latency-svc-r7whr Mar 11 18:49:27.057: INFO: Got endpoints: latency-svc-ffj27 [749.337353ms] Mar 11 18:49:27.062: INFO: Created: latency-svc-mt59c Mar 11 18:49:27.107: INFO: Got endpoints: latency-svc-5nlvf [749.522716ms] Mar 11 18:49:27.112: INFO: Created: latency-svc-g9zdg Mar 11 18:49:27.158: INFO: Got endpoints: latency-svc-k8wp9 [750.746218ms] Mar 11 18:49:27.163: INFO: Created: latency-svc-6gcb9 Mar 11 18:49:27.207: INFO: Got endpoints: latency-svc-c7kjx [750.000662ms] Mar 11 18:49:27.213: INFO: Created: latency-svc-jkxfl Mar 11 18:49:27.257: INFO: Got endpoints: latency-svc-5g9c2 [749.5988ms] Mar 11 18:49:27.263: INFO: Created: latency-svc-mmrrw Mar 11 18:49:27.307: INFO: Got endpoints: latency-svc-cfxl6 [749.787244ms] Mar 11 18:49:27.312: 
INFO: Created: latency-svc-c6wxt Mar 11 18:49:27.358: INFO: Got endpoints: latency-svc-2gdnn [750.432294ms] Mar 11 18:49:27.363: INFO: Created: latency-svc-42vbm Mar 11 18:49:27.407: INFO: Got endpoints: latency-svc-9vsff [749.551907ms] Mar 11 18:49:27.414: INFO: Created: latency-svc-9rgp5 Mar 11 18:49:27.458: INFO: Got endpoints: latency-svc-6r9s4 [750.281742ms] Mar 11 18:49:27.463: INFO: Created: latency-svc-xl9dd Mar 11 18:49:27.507: INFO: Got endpoints: latency-svc-jwxw7 [749.752273ms] Mar 11 18:49:27.514: INFO: Created: latency-svc-f7j62 Mar 11 18:49:27.557: INFO: Got endpoints: latency-svc-ztxnc [749.829384ms] Mar 11 18:49:27.563: INFO: Created: latency-svc-hxqz4 Mar 11 18:49:27.607: INFO: Got endpoints: latency-svc-vptvm [750.190944ms] Mar 11 18:49:27.613: INFO: Created: latency-svc-sk6rh Mar 11 18:49:27.657: INFO: Got endpoints: latency-svc-sdh62 [750.576169ms] Mar 11 18:49:27.663: INFO: Created: latency-svc-smqqx Mar 11 18:49:27.707: INFO: Got endpoints: latency-svc-p6dmb [749.735986ms] Mar 11 18:49:27.713: INFO: Created: latency-svc-q2ptt Mar 11 18:49:27.757: INFO: Got endpoints: latency-svc-r7whr [749.78081ms] Mar 11 18:49:27.762: INFO: Created: latency-svc-dg895 Mar 11 18:49:27.807: INFO: Got endpoints: latency-svc-mt59c [750.388205ms] Mar 11 18:49:27.813: INFO: Created: latency-svc-fkvt5 Mar 11 18:49:27.857: INFO: Got endpoints: latency-svc-g9zdg [750.359228ms] Mar 11 18:49:27.863: INFO: Created: latency-svc-2mfz4 Mar 11 18:49:27.908: INFO: Got endpoints: latency-svc-6gcb9 [750.159635ms] Mar 11 18:49:27.914: INFO: Created: latency-svc-7rwht Mar 11 18:49:27.958: INFO: Got endpoints: latency-svc-jkxfl [750.18336ms] Mar 11 18:49:27.963: INFO: Created: latency-svc-ktgzq Mar 11 18:49:28.007: INFO: Got endpoints: latency-svc-mmrrw [750.317765ms] Mar 11 18:49:28.013: INFO: Created: latency-svc-njsc2 Mar 11 18:49:28.058: INFO: Got endpoints: latency-svc-c6wxt [750.366764ms] Mar 11 18:49:28.063: INFO: Created: latency-svc-gx59b Mar 11 18:49:28.107: INFO: Got 
endpoints: latency-svc-42vbm [749.431687ms] Mar 11 18:49:28.113: INFO: Created: latency-svc-ssz6r Mar 11 18:49:28.157: INFO: Got endpoints: latency-svc-9rgp5 [750.276887ms] Mar 11 18:49:28.163: INFO: Created: latency-svc-ql5rx Mar 11 18:49:28.208: INFO: Got endpoints: latency-svc-xl9dd [750.198737ms] Mar 11 18:49:28.213: INFO: Created: latency-svc-jh7zt Mar 11 18:49:28.257: INFO: Got endpoints: latency-svc-f7j62 [749.573719ms] Mar 11 18:49:28.263: INFO: Created: latency-svc-wl7fr Mar 11 18:49:28.308: INFO: Got endpoints: latency-svc-hxqz4 [750.897991ms] Mar 11 18:49:28.314: INFO: Created: latency-svc-s5srm Mar 11 18:49:28.357: INFO: Got endpoints: latency-svc-sk6rh [749.69508ms] Mar 11 18:49:28.362: INFO: Created: latency-svc-m4g5s Mar 11 18:49:28.408: INFO: Got endpoints: latency-svc-smqqx [749.989749ms] Mar 11 18:49:28.413: INFO: Created: latency-svc-h92jl Mar 11 18:49:28.458: INFO: Got endpoints: latency-svc-q2ptt [750.853937ms] Mar 11 18:49:28.466: INFO: Created: latency-svc-slfl8 Mar 11 18:49:28.508: INFO: Got endpoints: latency-svc-dg895 [750.520933ms] Mar 11 18:49:28.513: INFO: Created: latency-svc-bf942 Mar 11 18:49:28.557: INFO: Got endpoints: latency-svc-fkvt5 [749.580903ms] Mar 11 18:49:28.562: INFO: Created: latency-svc-cbxzk Mar 11 18:49:28.608: INFO: Got endpoints: latency-svc-2mfz4 [750.122952ms] Mar 11 18:49:28.613: INFO: Created: latency-svc-djcc7 Mar 11 18:49:28.657: INFO: Got endpoints: latency-svc-7rwht [748.969362ms] Mar 11 18:49:28.662: INFO: Created: latency-svc-92c2p Mar 11 18:49:28.707: INFO: Got endpoints: latency-svc-ktgzq [749.570193ms] Mar 11 18:49:28.712: INFO: Created: latency-svc-c5mxz Mar 11 18:49:28.757: INFO: Got endpoints: latency-svc-njsc2 [750.070535ms] Mar 11 18:49:28.763: INFO: Created: latency-svc-ggr7z Mar 11 18:49:28.807: INFO: Got endpoints: latency-svc-gx59b [749.778033ms] Mar 11 18:49:28.813: INFO: Created: latency-svc-m5m7w Mar 11 18:49:28.857: INFO: Got endpoints: latency-svc-ssz6r [750.165075ms] Mar 11 18:49:28.863: 
INFO: Created: latency-svc-hjmln Mar 11 18:49:28.907: INFO: Got endpoints: latency-svc-ql5rx [749.592445ms] Mar 11 18:49:28.913: INFO: Created: latency-svc-gw5nh Mar 11 18:49:28.957: INFO: Got endpoints: latency-svc-jh7zt [749.44108ms] Mar 11 18:49:28.962: INFO: Created: latency-svc-vffkd Mar 11 18:49:29.009: INFO: Got endpoints: latency-svc-wl7fr [751.723533ms] Mar 11 18:49:29.014: INFO: Created: latency-svc-vfgpv Mar 11 18:49:29.057: INFO: Got endpoints: latency-svc-s5srm [749.298726ms] Mar 11 18:49:29.063: INFO: Created: latency-svc-g8hss Mar 11 18:49:29.108: INFO: Got endpoints: latency-svc-m4g5s [750.712873ms] Mar 11 18:49:29.113: INFO: Created: latency-svc-jwtjx Mar 11 18:49:29.158: INFO: Got endpoints: latency-svc-h92jl [749.977241ms] Mar 11 18:49:29.163: INFO: Created: latency-svc-7z9g2 Mar 11 18:49:29.207: INFO: Got endpoints: latency-svc-slfl8 [749.117307ms] Mar 11 18:49:29.213: INFO: Created: latency-svc-2blx7 Mar 11 18:49:29.257: INFO: Got endpoints: latency-svc-bf942 [749.289722ms] Mar 11 18:49:29.262: INFO: Created: latency-svc-mhvfq Mar 11 18:49:29.308: INFO: Got endpoints: latency-svc-cbxzk [750.927791ms] Mar 11 18:49:29.313: INFO: Created: latency-svc-8ll4t Mar 11 18:49:29.357: INFO: Got endpoints: latency-svc-djcc7 [749.464015ms] Mar 11 18:49:29.363: INFO: Created: latency-svc-4v7mk Mar 11 18:49:29.407: INFO: Got endpoints: latency-svc-92c2p [750.203019ms] Mar 11 18:49:29.413: INFO: Created: latency-svc-g46sk Mar 11 18:49:29.458: INFO: Got endpoints: latency-svc-c5mxz [750.785333ms] Mar 11 18:49:29.464: INFO: Created: latency-svc-dmj6v Mar 11 18:49:29.507: INFO: Got endpoints: latency-svc-ggr7z [749.735466ms] Mar 11 18:49:29.513: INFO: Created: latency-svc-qldg2 Mar 11 18:49:29.557: INFO: Got endpoints: latency-svc-m5m7w [749.736853ms] Mar 11 18:49:29.562: INFO: Created: latency-svc-4pjpp Mar 11 18:49:29.608: INFO: Got endpoints: latency-svc-hjmln [750.410588ms] Mar 11 18:49:29.614: INFO: Created: latency-svc-np2wn Mar 11 18:49:29.657: INFO: Got 
endpoints: latency-svc-gw5nh [750.203327ms] Mar 11 18:49:29.663: INFO: Created: latency-svc-sj6h9 Mar 11 18:49:29.708: INFO: Got endpoints: latency-svc-vffkd [750.773721ms] Mar 11 18:49:29.714: INFO: Created: latency-svc-lm7cj Mar 11 18:49:29.757: INFO: Got endpoints: latency-svc-vfgpv [748.648674ms] Mar 11 18:49:29.763: INFO: Created: latency-svc-qf8m2 Mar 11 18:49:29.808: INFO: Got endpoints: latency-svc-g8hss [750.406914ms] Mar 11 18:49:29.813: INFO: Created: latency-svc-lj4wf Mar 11 18:49:29.857: INFO: Got endpoints: latency-svc-jwtjx [749.168273ms] Mar 11 18:49:29.862: INFO: Created: latency-svc-5pnbf Mar 11 18:49:29.907: INFO: Got endpoints: latency-svc-7z9g2 [749.587671ms] Mar 11 18:49:29.913: INFO: Created: latency-svc-fjnlz Mar 11 18:49:29.957: INFO: Got endpoints: latency-svc-2blx7 [750.390512ms] Mar 11 18:49:29.963: INFO: Created: latency-svc-lnn45 Mar 11 18:49:30.008: INFO: Got endpoints: latency-svc-mhvfq [750.500091ms] Mar 11 18:49:30.013: INFO: Created: latency-svc-jc7xq Mar 11 18:49:30.057: INFO: Got endpoints: latency-svc-8ll4t [749.293312ms] Mar 11 18:49:30.064: INFO: Created: latency-svc-vnvjj Mar 11 18:49:30.107: INFO: Got endpoints: latency-svc-4v7mk [750.275435ms] Mar 11 18:49:30.113: INFO: Created: latency-svc-bzlgc Mar 11 18:49:30.157: INFO: Got endpoints: latency-svc-g46sk [749.940926ms] Mar 11 18:49:30.164: INFO: Created: latency-svc-n6gxd Mar 11 18:49:30.207: INFO: Got endpoints: latency-svc-dmj6v [748.905234ms] Mar 11 18:49:30.213: INFO: Created: latency-svc-kvl48 Mar 11 18:49:30.257: INFO: Got endpoints: latency-svc-qldg2 [750.26833ms] Mar 11 18:49:30.263: INFO: Created: latency-svc-v68mk Mar 11 18:49:30.308: INFO: Got endpoints: latency-svc-4pjpp [750.450111ms] Mar 11 18:49:30.313: INFO: Created: latency-svc-cmjf4 Mar 11 18:49:30.358: INFO: Got endpoints: latency-svc-np2wn [749.846492ms] Mar 11 18:49:30.362: INFO: Created: latency-svc-2x5dc Mar 11 18:49:30.408: INFO: Got endpoints: latency-svc-sj6h9 [750.286875ms] Mar 11 18:49:30.413: 
INFO: Created: latency-svc-p856c Mar 11 18:49:30.458: INFO: Got endpoints: latency-svc-lm7cj [749.508509ms] Mar 11 18:49:30.463: INFO: Created: latency-svc-dl29b Mar 11 18:49:30.507: INFO: Got endpoints: latency-svc-qf8m2 [749.920467ms] Mar 11 18:49:30.513: INFO: Created: latency-svc-8cz5k Mar 11 18:49:30.558: INFO: Got endpoints: latency-svc-lj4wf [750.30759ms] Mar 11 18:49:30.564: INFO: Created: latency-svc-6gk9n Mar 11 18:49:30.608: INFO: Got endpoints: latency-svc-5pnbf [750.424161ms] Mar 11 18:49:30.614: INFO: Created: latency-svc-stqxl Mar 11 18:49:30.657: INFO: Got endpoints: latency-svc-fjnlz [749.880693ms] Mar 11 18:49:30.663: INFO: Created: latency-svc-ncfgw Mar 11 18:49:30.707: INFO: Got endpoints: latency-svc-lnn45 [750.107289ms] Mar 11 18:49:30.714: INFO: Created: latency-svc-hg2px Mar 11 18:49:30.757: INFO: Got endpoints: latency-svc-jc7xq [748.979318ms] Mar 11 18:49:30.762: INFO: Created: latency-svc-9grpd Mar 11 18:49:30.807: INFO: Got endpoints: latency-svc-vnvjj [749.876196ms] Mar 11 18:49:30.813: INFO: Created: latency-svc-m4gdt Mar 11 18:49:30.858: INFO: Got endpoints: latency-svc-bzlgc [750.125798ms] Mar 11 18:49:30.864: INFO: Created: latency-svc-7h9hg Mar 11 18:49:30.907: INFO: Got endpoints: latency-svc-n6gxd [749.820259ms] Mar 11 18:49:30.913: INFO: Created: latency-svc-8987g Mar 11 18:49:30.957: INFO: Got endpoints: latency-svc-kvl48 [750.446169ms] Mar 11 18:49:30.963: INFO: Created: latency-svc-wmkwc Mar 11 18:49:31.008: INFO: Got endpoints: latency-svc-v68mk [750.128452ms] Mar 11 18:49:31.057: INFO: Got endpoints: latency-svc-cmjf4 [749.588575ms] Mar 11 18:49:31.107: INFO: Got endpoints: latency-svc-2x5dc [749.86957ms] Mar 11 18:49:31.158: INFO: Got endpoints: latency-svc-p856c [750.082759ms] Mar 11 18:49:31.208: INFO: Got endpoints: latency-svc-dl29b [750.193583ms] Mar 11 18:49:31.258: INFO: Got endpoints: latency-svc-8cz5k [750.139426ms] Mar 11 18:49:31.307: INFO: Got endpoints: latency-svc-6gk9n [749.337681ms] Mar 11 18:49:31.357: 
INFO: Got endpoints: latency-svc-stqxl [749.719862ms] Mar 11 18:49:31.407: INFO: Got endpoints: latency-svc-ncfgw [750.325984ms] Mar 11 18:49:31.457: INFO: Got endpoints: latency-svc-hg2px [749.967227ms] Mar 11 18:49:31.508: INFO: Got endpoints: latency-svc-9grpd [751.212546ms] Mar 11 18:49:31.558: INFO: Got endpoints: latency-svc-m4gdt [750.390021ms] Mar 11 18:49:31.607: INFO: Got endpoints: latency-svc-7h9hg [749.689302ms] Mar 11 18:49:31.658: INFO: Got endpoints: latency-svc-8987g [750.299903ms] Mar 11 18:49:31.708: INFO: Got endpoints: latency-svc-wmkwc [750.118377ms] Mar 11 18:49:31.708: INFO: Latencies: [7.762411ms 10.844015ms 13.249329ms 15.773499ms 19.318785ms 21.603323ms 24.859706ms 27.4645ms 30.39723ms 32.587266ms 35.961447ms 38.612168ms 41.132003ms 42.136225ms 42.514223ms 42.625314ms 42.733149ms 42.822296ms 42.885686ms 43.349491ms 43.471753ms 43.498662ms 43.548625ms 43.813987ms 43.854947ms 44.008646ms 44.070404ms 44.304561ms 44.516734ms 44.849898ms 45.679386ms 92.786564ms 139.51433ms 185.131085ms 234.730779ms 279.765811ms 328.0598ms 375.296808ms 422.461289ms 470.013422ms 516.132163ms 563.773878ms 611.383607ms 658.551793ms 705.307755ms 747.950332ms 748.648674ms 748.770935ms 748.872962ms 748.905234ms 748.969362ms 748.979318ms 749.092706ms 749.117307ms 749.134423ms 749.168273ms 749.259326ms 749.289722ms 749.293312ms 749.298726ms 749.337353ms 749.337681ms 749.39521ms 749.42159ms 749.431687ms 749.44108ms 749.464015ms 749.508509ms 749.522716ms 749.551907ms 749.567354ms 749.570193ms 749.572809ms 749.573719ms 749.580903ms 749.581729ms 749.587671ms 749.588575ms 749.592445ms 749.5988ms 749.611383ms 749.614487ms 749.623394ms 749.633844ms 749.654854ms 749.689302ms 749.69508ms 749.719862ms 749.721641ms 749.730037ms 749.733581ms 749.735466ms 749.735986ms 749.736853ms 749.750294ms 749.752273ms 749.775932ms 749.778033ms 749.78081ms 749.787244ms 749.820259ms 749.829384ms 749.846492ms 749.857342ms 749.86957ms 749.876196ms 749.880693ms 749.883656ms 749.912813ms 
749.919822ms 749.920467ms 749.93638ms 749.940926ms 749.967227ms 749.971026ms 749.974318ms 749.977241ms 749.989749ms 750.000662ms 750.022777ms 750.023936ms 750.027318ms 750.037045ms 750.040175ms 750.060932ms 750.062197ms 750.070535ms 750.082759ms 750.105613ms 750.107289ms 750.118377ms 750.122952ms 750.125798ms 750.128452ms 750.139426ms 750.157918ms 750.159635ms 750.165075ms 750.166917ms 750.16696ms 750.18336ms 750.190944ms 750.193583ms 750.198737ms 750.201238ms 750.203019ms 750.203327ms 750.203882ms 750.234263ms 750.26833ms 750.275435ms 750.276887ms 750.281742ms 750.286875ms 750.293092ms 750.299559ms 750.299903ms 750.305223ms 750.30759ms 750.317765ms 750.325984ms 750.353592ms 750.359228ms 750.366764ms 750.388205ms 750.390021ms 750.390356ms 750.390512ms 750.391268ms 750.3994ms 750.406914ms 750.407859ms 750.410588ms 750.424161ms 750.432294ms 750.439236ms 750.446169ms 750.450111ms 750.472014ms 750.490046ms 750.493176ms 750.500091ms 750.513272ms 750.520933ms 750.576169ms 750.712013ms 750.712873ms 750.732591ms 750.746218ms 750.773721ms 750.785333ms 750.826256ms 750.853937ms 750.896004ms 750.897991ms 750.927791ms 751.084696ms 751.212546ms 751.723533ms 752.533948ms] Mar 11 18:49:31.708: INFO: 50 %ile: 749.820259ms Mar 11 18:49:31.708: INFO: 90 %ile: 750.493176ms Mar 11 18:49:31.708: INFO: 99 %ile: 751.723533ms Mar 11 18:49:31.708: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:49:31.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5733" for this suite. 
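The summary above reports 50/90/99 %ile latencies over 200 samples, taken from the sorted list the test prints. A hedged sketch of a nearest-rank percentile computation of this kind (our own illustration; the e2e framework's exact percentile definition may differ, e.g. by interpolating between samples):

```python
import math

def percentile(sorted_samples, p):
    """p-th percentile as the ceil(p/100 * n)-th smallest sample
    (nearest-rank definition). `sorted_samples` must be sorted ascending."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = math.ceil(p / 100 * len(sorted_samples))  # 1-based rank
    return sorted_samples[max(rank, 1) - 1]
```

For a run like this one with n = 200, the nearest-rank definition picks the 100th, 180th, and 198th smallest samples for the 50th, 90th, and 99th percentiles respectively.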
• [SLOW TEST:12.867 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":39,"skipped":556,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:49:31.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-9483
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-744s
STEP: Creating a pod to test atomic-volume-subpath
Mar 11 18:49:31.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-744s" in namespace "subpath-9483" to be "Succeeded or Failed"
Mar 11 18:49:31.866: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.945605ms
Mar 11 18:49:33.871: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007744413s
Mar 11 18:49:35.876: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 4.01253157s
Mar 11 18:49:37.879: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 6.015995285s
Mar 11 18:49:39.884: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 8.02073274s
Mar 11 18:49:41.889: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 10.025823508s
Mar 11 18:49:43.894: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 12.030611101s
Mar 11 18:49:45.899: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 14.035539749s
Mar 11 18:49:47.904: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 16.041076224s
Mar 11 18:49:49.909: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 18.046083835s
Mar 11 18:49:51.915: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 20.051539966s
Mar 11 18:49:53.918: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Running", Reason="", readiness=true. Elapsed: 22.055177554s
Mar 11 18:49:55.923: INFO: Pod "pod-subpath-test-downwardapi-744s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060400999s
STEP: Saw pod success
Mar 11 18:49:55.923: INFO: Pod "pod-subpath-test-downwardapi-744s" satisfied condition "Succeeded or Failed"
Mar 11 18:49:55.926: INFO: Trying to get logs from node node1 pod pod-subpath-test-downwardapi-744s container test-container-subpath-downwardapi-744s:
STEP: delete the pod
Mar 11 18:49:55.944: INFO: Waiting for pod pod-subpath-test-downwardapi-744s to disappear
Mar 11 18:49:55.946: INFO: Pod pod-subpath-test-downwardapi-744s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-744s
Mar 11 18:49:55.946: INFO: Deleting pod "pod-subpath-test-downwardapi-744s" in namespace "subpath-9483"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:49:55.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9483" for this suite.
• [SLOW TEST:24.235 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":40,"skipped":558,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:49:55.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2323
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-2323
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2323 to expose endpoints map[]
Mar 11 18:49:56.088: INFO: Get endpoints failed (2.396055ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 11 18:49:57.091: INFO: successfully validated that service endpoint-test2 in namespace services-2323 exposes endpoints map[] (1.005543401s elapsed)
STEP: Creating pod pod1 in namespace services-2323
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2323 to expose endpoints map[pod1:[80]]
Mar 11 18:50:00.127: INFO: successfully validated that service endpoint-test2 in namespace services-2323 exposes endpoints map[pod1:[80]] (3.023477815s elapsed)
STEP: Creating pod pod2 in namespace services-2323
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2323 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 11 18:50:03.173: INFO: successfully validated that service endpoint-test2 in namespace services-2323 exposes endpoints map[pod1:[80] pod2:[80]] (3.033694276s elapsed)
STEP: Deleting pod pod1 in namespace services-2323
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2323 to expose endpoints map[pod2:[80]]
Mar 11 18:50:04.190: INFO: successfully validated that service endpoint-test2 in namespace services-2323 exposes endpoints map[pod2:[80]] (1.011157439s elapsed)
STEP: Deleting pod pod2 in namespace services-2323
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2323 to expose endpoints map[]
Mar 11 18:50:04.196: INFO: successfully validated that service endpoint-test2 in namespace services-2323 exposes endpoints map[] (2.303021ms elapsed)
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:50:04.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2323" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:8.259 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":41,"skipped":578,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:50:04.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
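The endpoint validation steps above repeatedly compare the service's observed endpoints against an expected pod-to-ports map (e.g. map[pod1:[80] pod2:[80]]). As a minimal sketch of that comparison logic (`endpointsMatch` is a hypothetical helper, not the framework's actual checker):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsMatch reports whether the observed pod->ports mapping exposes
// exactly the expected endpoints, ignoring port order within each pod.
func endpointsMatch(expected, observed map[string][]int) bool {
	if len(expected) != len(observed) {
		return false
	}
	for pod, wantPorts := range expected {
		gotPorts, ok := observed[pod]
		if !ok {
			return false
		}
		// Compare sorted copies so port order does not matter.
		w := append([]int(nil), wantPorts...)
		g := append([]int(nil), gotPorts...)
		sort.Ints(w)
		sort.Ints(g)
		if !reflect.DeepEqual(w, g) {
			return false
		}
	}
	return true
}

func main() {
	expected := map[string][]int{"pod1": {80}, "pod2": {80}}
	fmt.Println(endpointsMatch(expected, map[string][]int{"pod1": {80}, "pod2": {80}})) // true
	fmt.Println(endpointsMatch(expected, map[string][]int{"pod2": {80}}))               // false
}
```

The real test polls until this kind of comparison succeeds or the 3m0s timeout expires.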
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-5803 STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fb7lh in namespace proxy-5803 I0311 18:50:04.348731 12 runners.go:190] Created replication controller with name: proxy-service-fb7lh, namespace: proxy-5803, replica count: 1 I0311 18:50:05.400725 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 18:50:06.401068 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 18:50:07.401426 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:08.402676 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:09.403860 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:10.405107 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:11.407771 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:12.410431 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 18:50:13.410818 12 runners.go:190] proxy-service-fb7lh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 18:50:13.413: INFO: setup took 9.074532704s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.370765ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 3.680665ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.566809ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 3.957179ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.707958ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.928264ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... 
(200; 3.624042ms) Mar 11 18:50:13.417: INFO: (0) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.795738ms) Mar 11 18:50:13.419: INFO: (0) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 6.137463ms) Mar 11 18:50:13.419: INFO: (0) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 6.481515ms) Mar 11 18:50:13.420: INFO: (0) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 6.297809ms) Mar 11 18:50:13.422: INFO: (0) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 8.412984ms) Mar 11 18:50:13.422: INFO: (0) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 8.337344ms) Mar 11 18:50:13.422: INFO: (0) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 8.710995ms) Mar 11 18:50:13.422: INFO: (0) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 9.106156ms) Mar 11 18:50:13.422: INFO: (0) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 2.601321ms) Mar 11 18:50:13.425: INFO: (1) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.720181ms) Mar 11 18:50:13.425: INFO: (1) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... 
(200; 3.064227ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.146278ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.973302ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.788745ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.69824ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.73542ms) Mar 11 18:50:13.426: INFO: (1) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.835202ms) Mar 11 18:50:13.427: INFO: (1) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.827902ms) Mar 11 18:50:13.427: INFO: (1) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.801063ms) Mar 11 18:50:13.427: INFO: (1) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.802409ms) Mar 11 18:50:13.429: INFO: (2) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.543182ms) Mar 11 18:50:13.429: INFO: (2) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.502164ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.788144ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.208267ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... 
(200; 3.323368ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.637989ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.733058ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 3.687907ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.674593ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 3.644188ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.68827ms) Mar 11 18:50:13.430: INFO: (2) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.827893ms) Mar 11 18:50:13.431: INFO: (2) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.047083ms) Mar 11 18:50:13.431: INFO: (2) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 4.144369ms) Mar 11 18:50:13.431: INFO: (2) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.053118ms) Mar 11 18:50:13.435: INFO: (3) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... 
(200; 4.274246ms) Mar 11 18:50:13.436: INFO: (3) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 4.385532ms) Mar 11 18:50:13.436: INFO: (3) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 4.306184ms) Mar 11 18:50:13.436: INFO: (3) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 4.568853ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 4.712286ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 4.682263ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 4.884474ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 5.104924ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 5.13177ms) Mar 11 18:50:13.437: INFO: (3) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 5.588136ms) Mar 11 18:50:13.438: INFO: (3) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 5.554882ms) Mar 11 18:50:13.440: INFO: (4) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.344205ms) Mar 11 18:50:13.440: INFO: (4) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... 
(200; 2.180271ms) Mar 11 18:50:13.440: INFO: (4) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.483128ms) Mar 11 18:50:13.440: INFO: (4) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.785323ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.681881ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.713796ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.755244ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.242281ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.120267ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... 
(200; 3.052771ms) Mar 11 18:50:13.441: INFO: (4) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.262424ms) Mar 11 18:50:13.442: INFO: (4) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.532391ms) Mar 11 18:50:13.442: INFO: (4) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.667006ms) Mar 11 18:50:13.442: INFO: (4) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.602495ms) Mar 11 18:50:13.442: INFO: (4) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.612993ms) Mar 11 18:50:13.444: INFO: (5) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.252135ms) Mar 11 18:50:13.444: INFO: (5) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.348544ms) Mar 11 18:50:13.444: INFO: (5) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.699977ms) Mar 11 18:50:13.444: INFO: (5) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.533292ms) Mar 11 18:50:13.444: INFO: (5) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.754365ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.740736ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 2.930635ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... 
(200; 2.896886ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.151727ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 3.252117ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.086321ms) Mar 11 18:50:13.445: INFO: (5) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.423553ms) Mar 11 18:50:13.446: INFO: (5) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.656906ms) Mar 11 18:50:13.446: INFO: (5) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.884155ms) Mar 11 18:50:13.446: INFO: (5) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.338765ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.37266ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 2.430366ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.607895ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.874286ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.131535ms) Mar 11 18:50:13.449: INFO: (6) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.853628ms) Mar 11 18:50:13.450: INFO: (6) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.947481ms) Mar 11 18:50:13.450: INFO: (6) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... 
(200; 1.805931ms) Mar 11 18:50:13.453: INFO: (7) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.127884ms) Mar 11 18:50:13.453: INFO: (7) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 1.934609ms) Mar 11 18:50:13.453: INFO: (7) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.367776ms) Mar 11 18:50:13.453: INFO: (7) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.361685ms) Mar 11 18:50:13.453: INFO: (7) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.775688ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.930456ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.756395ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.018221ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.268104ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.320184ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.501553ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.574459ms) Mar 11 18:50:13.454: INFO: (7) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.712129ms) Mar 11 18:50:13.457: INFO: (8) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.194283ms) Mar 11 18:50:13.457: INFO: (8) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 
2.466788ms) Mar 11 18:50:13.457: INFO: (8) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... (200; 2.689358ms) Mar 11 18:50:13.457: INFO: (8) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 2.594669ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.102696ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.106267ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.243846ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.196695ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.472454ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.563379ms) Mar 11 18:50:13.458: INFO: (8) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 3.642518ms) Mar 11 18:50:13.459: INFO: (8) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 4.130245ms) Mar 11 18:50:13.459: INFO: (8) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 4.116429ms) Mar 11 18:50:13.459: INFO: (8) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.390662ms) Mar 11 18:50:13.459: INFO: (8) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.426029ms) Mar 11 18:50:13.461: INFO: (9) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.013935ms) Mar 11 18:50:13.462: INFO: (9) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... 
(200; 2.283144ms) Mar 11 18:50:13.462: INFO: (9) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.608924ms) Mar 11 18:50:13.462: INFO: (9) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.600154ms) Mar 11 18:50:13.462: INFO: (9) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 2.617989ms) Mar 11 18:50:13.463: INFO: (9) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.774186ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.639314ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 4.659197ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 4.870245ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 4.711728ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 4.875289ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 4.747313ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 4.981254ms) Mar 11 18:50:13.464: INFO: (9) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 5.129603ms) Mar 11 18:50:13.465: INFO: (9) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 5.433392ms) Mar 11 18:50:13.467: INFO: (10) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.152249ms) Mar 11 18:50:13.467: INFO: (10) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.38684ms) Mar 
11 18:50:13.467: INFO: (10) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 2.459492ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.634193ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.884569ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.046486ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.215705ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... (200; 3.62516ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.649836ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.440453ms) Mar 11 18:50:13.468: INFO: (10) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.585294ms) Mar 11 18:50:13.469: INFO: (10) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.636538ms) Mar 11 18:50:13.469: INFO: (10) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.579914ms) Mar 11 18:50:13.469: INFO: (10) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.025182ms) Mar 11 18:50:13.469: INFO: (10) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.326255ms) Mar 11 18:50:13.471: INFO: (11) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 1.972164ms) Mar 11 18:50:13.471: INFO: (11) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 
2.091243ms) Mar 11 18:50:13.471: INFO: (11) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.118577ms) Mar 11 18:50:13.472: INFO: (11) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.50592ms) Mar 11 18:50:13.472: INFO: (11) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 2.411472ms) Mar 11 18:50:13.472: INFO: (11) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.69068ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.004843ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.96827ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.124674ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.159715ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.475499ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.7238ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.845648ms) Mar 11 18:50:13.473: INFO: (11) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.877944ms) Mar 11 18:50:13.474: INFO: (11) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.065462ms) Mar 11 18:50:13.476: INFO: (12) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.442536ms) Mar 11 18:50:13.476: INFO: (12) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: 
tls baz (200; 2.620249ms) Mar 11 18:50:13.476: INFO: (12) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 2.725036ms) Mar 11 18:50:13.476: INFO: (12) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.478968ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.781909ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.920318ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 2.981377ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.969952ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.365972ms) Mar 11 18:50:13.477: INFO: (12) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 3.570724ms) Mar 11 18:50:13.478: INFO: (12) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.764681ms) Mar 11 18:50:13.478: INFO: (12) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.783795ms) Mar 11 18:50:13.478: INFO: (12) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 4.016008ms) Mar 11 18:50:13.478: INFO: (12) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.20998ms) Mar 11 18:50:13.478: INFO: (12) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 4.433192ms) Mar 11 18:50:13.480: INFO: (13) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.025746ms) Mar 11 18:50:13.481: INFO: (13) 
/api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: ... (200; 2.68752ms) Mar 11 18:50:13.481: INFO: (13) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.757297ms) Mar 11 18:50:13.481: INFO: (13) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.842636ms) Mar 11 18:50:13.481: INFO: (13) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.845931ms) Mar 11 18:50:13.481: INFO: (13) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 2.9237ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.137346ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 3.192396ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.215392ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.550035ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.620555ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.539954ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.439919ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.597508ms) Mar 11 18:50:13.482: INFO: (13) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.905107ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.367126ms) Mar 11 18:50:13.485: INFO: 
(14) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.18583ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.220931ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.197081ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 2.488066ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 2.636179ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.86049ms) Mar 11 18:50:13.485: INFO: (14) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.948742ms) Mar 11 18:50:13.486: INFO: (14) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.173788ms) Mar 11 18:50:13.486: INFO: (14) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.15891ms) Mar 11 18:50:13.486: INFO: (14) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.682674ms) Mar 11 18:50:13.486: INFO: (14) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 2.360294ms) Mar 11 18:50:13.489: INFO: (15) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.235592ms) Mar 11 18:50:13.489: INFO: (15) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.403551ms) Mar 11 18:50:13.489: INFO: (15) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.745449ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... 
(200; 2.886064ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.930809ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.927561ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.225713ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.336789ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.322145ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.598408ms) Mar 11 18:50:13.490: INFO: (15) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.730502ms) Mar 11 18:50:13.491: INFO: (15) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.776616ms) Mar 11 18:50:13.491: INFO: (15) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.0838ms) Mar 11 18:50:13.491: INFO: (15) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 4.174865ms) Mar 11 18:50:13.493: INFO: (16) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.291976ms) Mar 11 18:50:13.493: INFO: (16) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.194313ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.471542ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.589741ms) Mar 11 18:50:13.494: INFO: (16) 
/api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.512457ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.724093ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.896049ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.045451ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.126897ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 3.314352ms) Mar 11 18:50:13.494: INFO: (16) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.386086ms) Mar 11 18:50:13.495: INFO: (16) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.48461ms) Mar 11 18:50:13.495: INFO: (16) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.660771ms) Mar 11 18:50:13.495: INFO: (16) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.778235ms) Mar 11 18:50:13.495: INFO: (16) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.911026ms) Mar 11 18:50:13.497: INFO: (17) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.205882ms) Mar 11 18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test (200; 2.878718ms) Mar 11 18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.082144ms) Mar 11 18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.988233ms) Mar 11 
18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.004506ms) Mar 11 18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: test<... (200; 3.121868ms) Mar 11 18:50:13.498: INFO: (17) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.104661ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.35537ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 3.407188ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 3.580524ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 4.130172ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 4.118388ms) Mar 11 18:50:13.499: INFO: (17) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.987843ms) Mar 11 18:50:13.500: INFO: (17) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 4.437537ms) Mar 11 18:50:13.500: INFO: (17) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 4.357096ms) Mar 11 18:50:13.502: INFO: (18) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... 
(200; 2.042731ms) Mar 11 18:50:13.502: INFO: (18) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 2.051027ms) Mar 11 18:50:13.502: INFO: (18) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.137596ms) Mar 11 18:50:13.502: INFO: (18) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.391185ms) Mar 11 18:50:13.503: INFO: (18) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 2.59578ms) Mar 11 18:50:13.503: INFO: (18) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.874979ms) Mar 11 18:50:13.503: INFO: (18) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.185902ms) Mar 11 18:50:13.503: INFO: (18) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: test<... (200; 3.350957ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.493822ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname1/proxy/: foo (200; 3.60989ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname1/proxy/: tls baz (200; 3.731356ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.72865ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.959741ms) Mar 11 18:50:13.504: INFO: (18) /api/v1/namespaces/proxy-5803/services/https:proxy-service-fb7lh:tlsportname2/proxy/: tls qux (200; 4.466737ms) Mar 11 18:50:13.507: INFO: (19) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 2.000206ms) Mar 11 18:50:13.507: INFO: (19) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:1080/proxy/: 
test<... (200; 2.139506ms)
Mar 11 18:50:13.507: INFO: (19) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:1080/proxy/: ... (200; 2.691443ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/services/http:proxy-service-fb7lh:portname2/proxy/: bar (200; 3.019803ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b/proxy/: test (200; 3.136775ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname2/proxy/: bar (200; 3.094293ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/http:proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 2.999902ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:162/proxy/: bar (200; 3.033235ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/services/proxy-service-fb7lh:portname1/proxy/: foo (200; 3.307286ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:462/proxy/: tls qux (200; 3.473374ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:460/proxy/: tls baz (200; 3.56177ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/proxy-service-fb7lh-zj98b:160/proxy/: foo (200; 3.498688ms)
Mar 11 18:50:13.508: INFO: (19) /api/v1/namespaces/proxy-5803/pods/https:proxy-service-fb7lh-zj98b:443/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3688
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-fd0156d3-1c2c-4bf6-aaf7-f67ee407ebe0
STEP: Creating a pod to test consume configMaps
Mar 11 18:50:26.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2" in namespace "configmap-3688" to be "Succeeded or Failed"
Mar 11 18:50:26.716: INFO: Pod "pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875288ms
Mar 11 18:50:28.721: INFO: Pod "pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007289562s
Mar 11 18:50:30.725: INFO: Pod "pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011955876s
STEP: Saw pod success
Mar 11 18:50:30.726: INFO: Pod "pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2" satisfied condition "Succeeded or Failed"
Mar 11 18:50:30.727: INFO: Trying to get logs from node node2 pod pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2 container configmap-volume-test: 
STEP: delete the pod
Mar 11 18:50:30.746: INFO: Waiting for pod pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2 to disappear
Mar 11 18:50:30.747: INFO: Pod pod-configmaps-a9fdde9e-b1a8-49aa-9602-54945959a5e2 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:50:30.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3688" for this suite.
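The ConfigMap-volume consumption exercised above can be sketched as a pair of manifests. This is an illustrative reconstruction, not the generated objects from this run: the names, the data key, and the item mode below are assumptions chosen to match the test's description ("mappings and Item mode set"), not values taken from the log.

```yaml
# Hypothetical sketch of what the test creates: a ConfigMap consumed as a
# volume, with an items mapping (key -> path) and an explicit file mode.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # the run uses a generated UUID suffix
data:
  data-1: value-1                   # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps              # the run uses a generated UUID suffix
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                  # illustrative image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1        # the "mapping" under test
        mode: 0400                  # the "Item mode" under test (illustrative value)
```

The pod is expected to terminate with phase `Succeeded` once the mapped file is readable at the remapped path, which is what the `Waiting up to 5m0s ... to be "Succeeded or Failed"` polling above checks.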
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":683,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:50:30.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9491
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-2285ae9c-0d67-4187-97c1-5cfb5a0c4a38
STEP: Creating a pod to test consume secrets
Mar 11 18:50:30.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e" in namespace "projected-9491" to be "Succeeded or Failed"
Mar 11 18:50:30.894: INFO: Pod "pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54479ms
Mar 11 18:50:32.897: INFO: Pod "pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006075523s
Mar 11 18:50:34.903: INFO: Pod "pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011507958s
STEP: Saw pod success
Mar 11 18:50:34.903: INFO: Pod "pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e" satisfied condition "Succeeded or Failed"
Mar 11 18:50:34.906: INFO: Trying to get logs from node node1 pod pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e container secret-volume-test: 
STEP: delete the pod
Mar 11 18:50:34.919: INFO: Waiting for pod pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e to disappear
Mar 11 18:50:34.921: INFO: Pod pod-projected-secrets-0c988f8c-ca8b-406d-8863-e79343d0574e no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:50:34.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9491" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":689,"failed":0}
SSSS
------------------------------
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:50:34.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9064
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-9064
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9064
STEP: creating replication controller externalsvc in namespace services-9064
I0311 18:50:35.069733 12 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9064, replica count: 2
I0311 18:50:38.120380 12 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Mar 11 18:50:38.136: INFO: Creating new exec pod
Mar 11 18:50:42.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9064 execpodnwftc -- /bin/sh -x -c nslookup nodeport-service'
Mar 11 18:50:42.424: INFO: stderr: "+ nslookup nodeport-service\n"
Mar 11 18:50:42.424: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-9064.svc.cluster.local\tcanonical name = externalsvc.services-9064.svc.cluster.local.\nName:\texternalsvc.services-9064.svc.cluster.local\nAddress: 10.233.57.142\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9064, will wait for the garbage collector to delete the pods
Mar 11 18:50:42.483: INFO: Deleting ReplicationController externalsvc took: 5.424289ms
Mar 11 18:50:42.583: INFO: Terminating ReplicationController externalsvc pods took: 100.40062ms
Mar 11 18:50:47.294: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:50:47.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9064" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:12.381 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from NodePort to ExternalName [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":45,"skipped":693,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:50:47.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9732
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 11 18:50:47.448: INFO: Waiting up to 5m0s for pod "pod-4a09f71c-056e-42d2-ba00-5b09672e682b" in namespace "emptydir-9732" to be "Succeeded or Failed"
Mar 11 18:50:47.451: INFO: Pod "pod-4a09f71c-056e-42d2-ba00-5b09672e682b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054094ms
Mar 11 18:50:49.456: INFO: Pod "pod-4a09f71c-056e-42d2-ba00-5b09672e682b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007865089s
Mar 11 18:50:51.460: INFO: Pod "pod-4a09f71c-056e-42d2-ba00-5b09672e682b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011730132s
STEP: Saw pod success
Mar 11 18:50:51.460: INFO: Pod "pod-4a09f71c-056e-42d2-ba00-5b09672e682b" satisfied condition "Succeeded or Failed"
Mar 11 18:50:51.463: INFO: Trying to get logs from node node2 pod pod-4a09f71c-056e-42d2-ba00-5b09672e682b container test-container: 
STEP: delete the pod
Mar 11 18:50:51.478: INFO: Waiting for pod pod-4a09f71c-056e-42d2-ba00-5b09672e682b to disappear
Mar 11 18:50:51.480: INFO: Pod pod-4a09f71c-056e-42d2-ba00-5b09672e682b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:50:51.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9732" for this suite.
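The "emptydir 0644 on node default medium" step above can be sketched as a single pod manifest. This is an illustrative reconstruction under stated assumptions (pod name, image, and the shell command are hypothetical; the run's actual pod uses a generated name and the e2e mount-test image):

```yaml
# Hypothetical sketch: a pod writing to an emptyDir volume on the default
# medium (node disk) as root, then checking the 0644 file mode, which is
# roughly what the (root,0644,default) test above verifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-test           # the run uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its mount-test image
    command:
    - /bin/sh
    - -c
    - umask 0133 && echo content > /test-volume/test-file && ls -l /test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # empty spec = "default" medium; medium: Memory would be tmpfs
```

As in the log, success is observed by polling the pod until its phase reaches `Succeeded` and then reading the container's output from its logs.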
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":714,"failed":0}
SSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:50:51.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-3132
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-fa6e8806-bf8e-4764-8607-d43f382279d1 in namespace container-probe-3132
Mar 11 18:50:55.627: INFO: Started pod liveness-fa6e8806-bf8e-4764-8607-d43f382279d1 in namespace container-probe-3132
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 18:50:55.629: INFO: Initial restart count of pod liveness-fa6e8806-bf8e-4764-8607-d43f382279d1 is 0
Mar 11 18:51:13.670: INFO: Restart count of pod container-probe-3132/liveness-fa6e8806-bf8e-4764-8607-d43f382279d1 is now 1 (18.040917636s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:51:13.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3132" for this suite.
• [SLOW TEST:22.197 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":717,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:51:13.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-84
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 11 18:51:13.808: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 18:51:13.821: INFO: Waiting for terminating namespaces to be deleted...
Mar 11 18:51:13.823: INFO: Logging pods the kubelet thinks is on node node1 before test
Mar 11 18:51:13.836: INFO: kube-flannel-8pz9c from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container kube-flannel ready: true, restart count 2
Mar 11 18:51:13.836: INFO: cmk-init-discover-node2-29mrv from kube-system started at 2021-03-11 18:03:13 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container discover ready: false, restart count 0
Mar 11 18:51:13.836: INFO: Container init ready: false, restart count 0
Mar 11 18:51:13.836: INFO: Container install ready: false, restart count 0
Mar 11 18:51:13.836: INFO: cmk-webhook-888945845-2gpfq from kube-system started at 2021-03-11 18:03:34 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container cmk-webhook ready: true, restart count 0
Mar 11 18:51:13.836: INFO: node-exporter-mw629 from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container kube-rbac-proxy ready: true, restart count 0
Mar 11 18:51:13.836: INFO: Container node-exporter ready: true, restart count 0
Mar 11 18:51:13.836: INFO: kube-proxy-5zz5g from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container kube-proxy ready: true, restart count 2
Mar 11 18:51:13.836: INFO: collectd-4rvsd from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.836: INFO: Container collectd ready: true, restart count 0
Mar 11 18:51:13.836: INFO: Container collectd-exporter ready: true, restart count 0
Mar 11 18:51:13.836: INFO: Container rbac-proxy ready: true, restart count 0
Mar 11 18:51:13.837: INFO: cmk-s6v97 from kube-system started at 2021-03-11 18:03:34 +0000 UTC (2 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container nodereport ready: true, restart count 0
Mar 11 18:51:13.837: INFO: Container reconcile ready: true, restart count 0
Mar 11 18:51:13.837: INFO: nginx-proxy-node1 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container nginx-proxy ready: true, restart count 2
Mar 11 18:51:13.837: INFO: node-feature-discovery-worker-nf56t from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container nfd-worker ready: true, restart count 0
Mar 11 18:51:13.837: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container kube-sriovdp ready: true, restart count 0
Mar 11 18:51:13.837: INFO: prometheus-k8s-0 from monitoring started at 2021-03-11 18:04:37 +0000 UTC (5 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 18:51:13.837: INFO: Container grafana ready: true, restart count 0
Mar 11 18:51:13.837: INFO: Container prometheus ready: true, restart count 1
Mar 11 18:51:13.837: INFO: Container prometheus-config-reloader ready: true, restart count 0
Mar 11 18:51:13.837: INFO: Container rules-configmap-reloader ready: true, restart count 0
Mar 11 18:51:13.837: INFO: kube-multus-ds-amd64-gtmmz from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.837: INFO: Container kube-multus ready: true, restart count 1
Mar 11 18:51:13.837: INFO: Logging pods the kubelet thinks is on node node2 before test
Mar 11 18:51:13.851: INFO: kubernetes-dashboard-57777fbdcb-zsnff from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kubernetes-dashboard ready: true, restart count 1
Mar 11 18:51:13.851: INFO: node-feature-discovery-worker-8xdg7 from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container nfd-worker ready: true, restart count 0
Mar 11 18:51:13.851: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kube-sriovdp ready: true, restart count 0
Mar 11 18:51:13.851: INFO: cmk-init-discover-node2-9knwq from kube-system started at 2021-03-11 18:02:23 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container discover ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container init ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container install ready: false, restart count 0
Mar 11 18:51:13.851: INFO: kube-multus-ds-amd64-rpm89 from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kube-multus ready: true, restart count 1
Mar 11 18:51:13.851: INFO: cmk-init-discover-node1-vk7wm from kube-system started at 2021-03-11 18:01:40 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container discover ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container init ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container install ready: false, restart count 0
Mar 11 18:51:13.851: INFO: prometheus-operator-f66f5fb4d-f2pkm from monitoring started at 2021-03-11 18:04:21 +0000 UTC (2 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kube-rbac-proxy ready: true, restart count 0
Mar 11 18:51:13.851: INFO: Container prometheus-operator ready: true, restart count 0
Mar 11 18:51:13.851: INFO: cmk-init-discover-node2-qbc6m from kube-system started at 2021-03-11 18:02:53 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container discover ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container init ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container install ready: false, restart count 0
Mar 11 18:51:13.851: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz from monitoring started at 2021-03-11 18:07:22 +0000 UTC (2 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container tas-controller ready: true, restart count 0
Mar 11 18:51:13.851: INFO: Container tas-extender ready: true, restart count 0
Mar 11 18:51:13.851: INFO: nginx-proxy-node2 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container nginx-proxy ready: true, restart count 2
Mar 11 18:51:13.851: INFO: kube-proxy-znx8n from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kube-proxy ready: true, restart count 1
Mar 11 18:51:13.851: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 18:51:13.851: INFO: cmk-init-discover-node2-c5j6h from kube-system started at 2021-03-11 18:02:02 +0000 UTC (3 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container discover ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container init ready: false, restart count 0
Mar 11 18:51:13.851: INFO: Container install ready: false, restart count 0
Mar 11 18:51:13.851: INFO: kube-flannel-8wwvj from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container kube-flannel ready: true, restart count 2
Mar 11 18:51:13.851: INFO: cmk-slzjv from kube-system started at 2021-03-11 18:03:33 +0000 UTC (2 container statuses recorded)
Mar 11 18:51:13.851: INFO: Container nodereport ready: true, restart count 0
Mar 11 18:51:13.851: INFO: Container reconcile ready: true, restart count 0
Mar 11 18:51:13.851: INFO: node-exporter-x6vqx from monitoring started at 2021-03-11 18:04:28 +0000 
UTC (2 container statuses recorded) Mar 11 18:51:13.851: INFO: Container kube-rbac-proxy ready: true, restart count 0 Mar 11 18:51:13.851: INFO: Container node-exporter ready: true, restart count 0 Mar 11 18:51:13.851: INFO: collectd-86ww6 from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded) Mar 11 18:51:13.851: INFO: Container collectd ready: true, restart count 0 Mar 11 18:51:13.851: INFO: Container collectd-exporter ready: true, restart count 0 Mar 11 18:51:13.851: INFO: Container rbac-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-14853851-92f2-42d4-a950-fc5a0b8fc40d 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-14853851-92f2-42d4-a950-fc5a0b8fc40d off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-14853851-92f2-42d4-a950-fc5a0b8fc40d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:51:29.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-84" for this suite. 
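The pass above hinges on the scheduler's hostPort conflict rule: two pods clash only when hostPort, protocol, and host IP all overlap, with 0.0.0.0 overlapping every address. That is why pod1 (TCP 127.0.0.1:54321), pod2 (TCP 127.0.0.2:54321), and pod3 (UDP 127.0.0.2:54321) all schedule onto the same node. A minimal sketch of that predicate (illustrative only, not the scheduler's actual code):

```python
def host_ports_conflict(a, b):
    """Return True if two (host_ip, host_port, protocol) requests clash.

    Mirrors the rule exercised by the test: a conflict needs the same
    hostPort, the same protocol, and overlapping host IPs, where
    0.0.0.0 overlaps any address. Simplified sketch.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)


pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")  # same port, different hostIP: no conflict
pod3 = ("127.0.0.2", 54321, "UDP")  # same port and IP as pod2, different protocol
```

Under this rule all three pods coexist on node2, which matches the three "expect scheduled" steps in the log.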
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":48,"skipped":765,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:51:29.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-7298 STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 11 18:51:30.285: INFO: Pod name 
wrapped-volume-race-05411295-32ab-41b1-b015-7d9b13af3040: Found 2 pods out of 5 Mar 11 18:51:35.292: INFO: Pod name wrapped-volume-race-05411295-32ab-41b1-b015-7d9b13af3040: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05411295-32ab-41b1-b015-7d9b13af3040 in namespace emptydir-wrapper-7298, will wait for the garbage collector to delete the pods Mar 11 18:51:49.374: INFO: Deleting ReplicationController wrapped-volume-race-05411295-32ab-41b1-b015-7d9b13af3040 took: 6.669907ms Mar 11 18:51:49.974: INFO: Terminating ReplicationController wrapped-volume-race-05411295-32ab-41b1-b015-7d9b13af3040 pods took: 600.603894ms STEP: Creating RC which spawns configmap-volume pods Mar 11 18:52:06.491: INFO: Pod name wrapped-volume-race-8a9dec7f-5f8e-4f3e-8636-23627e722a76: Found 0 pods out of 5 Mar 11 18:52:11.505: INFO: Pod name wrapped-volume-race-8a9dec7f-5f8e-4f3e-8636-23627e722a76: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8a9dec7f-5f8e-4f3e-8636-23627e722a76 in namespace emptydir-wrapper-7298, will wait for the garbage collector to delete the pods Mar 11 18:52:27.591: INFO: Deleting ReplicationController wrapped-volume-race-8a9dec7f-5f8e-4f3e-8636-23627e722a76 took: 6.617457ms Mar 11 18:52:28.191: INFO: Terminating ReplicationController wrapped-volume-race-8a9dec7f-5f8e-4f3e-8636-23627e722a76 pods took: 600.463587ms STEP: Creating RC which spawns configmap-volume pods Mar 11 18:52:36.508: INFO: Pod name wrapped-volume-race-6d179913-b6e7-4493-8e44-6155456c640c: Found 0 pods out of 5 Mar 11 18:52:41.515: INFO: Pod name wrapped-volume-race-6d179913-b6e7-4493-8e44-6155456c640c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6d179913-b6e7-4493-8e44-6155456c640c in namespace emptydir-wrapper-7298, will wait for the garbage collector to delete the pods Mar 11 18:52:55.599: 
INFO: Deleting ReplicationController wrapped-volume-race-6d179913-b6e7-4493-8e44-6155456c640c took: 5.234721ms Mar 11 18:52:56.200: INFO: Terminating ReplicationController wrapped-volume-race-6d179913-b6e7-4493-8e44-6155456c640c pods took: 600.523039ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:06.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7298" for this suite. • [SLOW TEST:96.864 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":49,"skipped":773,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:06.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9469 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:06.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9469" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":50,"skipped":795,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:06.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-979 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-979" for this suite. • [SLOW TEST:6.163 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a read only busybox container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:13.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-6119 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 11 18:53:13.686: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 11 18:53:15.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085593, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085593, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085593, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085593, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 11 18:53:18.710: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:18.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6119" for this suite. STEP: Destroying namespace "webhook-6119-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.656 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":52,"skipped":840,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:18.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7713 STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1450 STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-1056 STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:34.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7713" for this suite. STEP: Destroying namespace "nsdeletetest-1450" for this suite. Mar 11 18:53:34.180: INFO: Namespace nsdeletetest-1450 was already deleted STEP: Destroying namespace "nsdeletetest-1056" for this suite. 
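The namespace test relies on cascading deletion: removing a namespace garbage-collects every object scoped to it, which is why the recreated namespace comes back with no pods. A toy in-memory model of that behaviour (hypothetical data structure, not a real client):

```python
def delete_namespace(cluster, ns):
    """Simulate namespaced garbage collection: deleting a namespace
    removes every pod scoped to it, as the test above verifies.
    Toy in-memory model, not the apiserver's namespace controller."""
    cluster["namespaces"].discard(ns)
    cluster["pods"] = [p for p in cluster["pods"] if p["namespace"] != ns]


cluster = {
    "namespaces": {"nsdeletetest", "kube-system"},
    "pods": [
        {"name": "test-pod", "namespace": "nsdeletetest"},
        {"name": "kube-proxy-znx8n", "namespace": "kube-system"},
    ],
}
delete_namespace(cluster, "nsdeletetest")
# pods in other namespaces are untouched; the deleted namespace is empty
```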
• [SLOW TEST:15.404 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":53,"skipped":844,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:34.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3844 STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 11 18:53:34.829: INFO: created pod pod-service-account-defaultsa Mar 11 18:53:34.829: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 11 18:53:34.838: INFO: created pod pod-service-account-mountsa Mar 11 18:53:34.838: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 11 18:53:34.847: INFO: created pod pod-service-account-nomountsa Mar 11 18:53:34.847: 
INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 11 18:53:34.856: INFO: created pod pod-service-account-defaultsa-mountspec Mar 11 18:53:34.856: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 11 18:53:34.867: INFO: created pod pod-service-account-mountsa-mountspec Mar 11 18:53:34.867: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 11 18:53:34.877: INFO: created pod pod-service-account-nomountsa-mountspec Mar 11 18:53:34.877: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 11 18:53:34.886: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 11 18:53:34.886: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 11 18:53:34.896: INFO: created pod pod-service-account-mountsa-nomountspec Mar 11 18:53:34.896: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 11 18:53:34.905: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 11 18:53:34.905: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:34.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3844" for this suite. 
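The nine pods above exercise the token-automount precedence rule: an explicit `automountServiceAccountToken` on the pod spec wins over the service account's setting, and the default when neither is set is to mount. A sketch of the resolution logic, checked against the mount results logged for each pod:

```python
def token_automounted(sa_automount, pod_automount):
    """Effective service-account token mount decision.

    The pod-spec field wins when set, otherwise the service account's
    field applies, otherwise the token is mounted by default. This is
    the documented precedence the nine test pods above walk through.
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```

For example, `pod-service-account-nomountsa-mountspec` uses a service account that opts out but a pod spec that opts in, and the log shows its token volume mounted.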
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":54,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:34.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8265 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 11 18:53:35.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d" in namespace "downward-api-8265" to be "Succeeded or Failed" Mar 11 18:53:35.048: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534855ms Mar 11 18:53:37.055: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010064281s Mar 11 18:53:39.061: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015536499s Mar 11 18:53:41.066: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020328737s Mar 11 18:53:43.070: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024961399s Mar 11 18:53:45.074: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028283978s Mar 11 18:53:47.076: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.03118217s STEP: Saw pod success Mar 11 18:53:47.077: INFO: Pod "downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d" satisfied condition "Succeeded or Failed" Mar 11 18:53:47.079: INFO: Trying to get logs from node node2 pod downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d container client-container: STEP: delete the pod Mar 11 18:53:47.099: INFO: Waiting for pod downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d to disappear Mar 11 18:53:47.102: INFO: Pod downwardapi-volume-7ba50381-d5c1-409f-8661-29b42703bb6d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:53:47.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8265" for this suite. 
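`defaultMode` sets the permission bits on every file projected into a downward API volume. A small sketch of how such a mode renders as seen from inside the pod; the manifest fragment and the 0400 value are assumptions for illustration, not read from the log:

```python
import stat


def mode_string(default_mode):
    """Render a volume defaultMode (octal int) the way `ls -l` would
    show the projected file inside the container. Sketch only."""
    return stat.filemode(stat.S_IFREG | default_mode)


# Hypothetical volume fragment in the shape the DefaultMode test exercises.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "defaultMode": 0o400,
        "items": [{"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}],
    },
}
```

A `defaultMode` of 0400 renders as `-r--------`, while the Kubernetes default of 0644 renders as `-rw-r--r--`.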
• [SLOW TEST:12.198 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:53:47.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-4518 STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4518 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 18:53:47.235: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 11 18:53:47.269: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 11 18:53:49.276: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) Mar 11 18:53:51.276: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 11 18:53:53.274: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 18:53:55.276: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 18:53:57.275: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 18:53:59.274: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 18:54:01.273: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 11 18:54:03.274: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 11 18:54:03.280: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 11 18:54:05.283: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 11 18:54:07.283: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 11 18:54:09.283: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 11 18:54:13.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.73:8080/dial?request=hostname&protocol=http&host=10.244.3.72&port=8080&tries=1'] Namespace:pod-network-test-4518 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 18:54:13.307: INFO: >>> kubeConfig: /root/.kube/config Mar 11 18:54:13.420: INFO: Waiting for responses: map[] Mar 11 18:54:13.422: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.73:8080/dial?request=hostname&protocol=http&host=10.244.4.73&port=8080&tries=1'] Namespace:pod-network-test-4518 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 18:54:13.422: INFO: >>> kubeConfig: /root/.kube/config Mar 11 18:54:13.525: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
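Each curl in the log hits a netserver's `/dial` endpoint from the test-container-pod, asking one pod to reach a peer and echo back the hostname it found. The probe URL can be reconstructed as follows (a sketch matching the URLs captured above; the netserver listens on 8080):

```python
from urllib.parse import urlencode


def dial_url(proxy_ip, target_ip, protocol="http", port=8080, tries=1):
    """Build the /dial probe URL the test execs via curl: the pod at
    proxy_ip is asked to dial target_ip:port and report the hostname
    it reached. Parameter order mirrors the URLs in the log."""
    query = urlencode({
        "request": "hostname",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"
```

An empty `map[]` in "Waiting for responses: map[]" means every expected hostname was collected, so intra-pod HTTP connectivity held in both directions.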
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:54:13.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4518" for this suite. • [SLOW TEST:26.421 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":937,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:54:13.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5643 STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-9f47f370-b7b3-4ee0-8104-deaea3d86c91 STEP: Creating a pod to test consume secrets Mar 11 18:54:13.671: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9" in namespace "projected-5643" to be "Succeeded or Failed" Mar 11 18:54:13.673: INFO: Pod "pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.9574ms Mar 11 18:54:15.677: INFO: Pod "pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006045344s Mar 11 18:54:17.683: INFO: Pod "pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011638978s STEP: Saw pod success Mar 11 18:54:17.683: INFO: Pod "pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9" satisfied condition "Succeeded or Failed" Mar 11 18:54:17.686: INFO: Trying to get logs from node node2 pod pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9 container projected-secret-volume-test: STEP: delete the pod Mar 11 18:54:17.700: INFO: Waiting for pod pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9 to disappear Mar 11 18:54:17.702: INFO: Pod pod-projected-secrets-79d966a1-a9aa-4430-ac2d-d17fda1f12d9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:54:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5643" for this suite. 
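The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from a simple poll loop over the pod's phase. A minimal sketch of that loop (names and the injectable `sleep`/`clock` hooks are assumptions, not the framework's API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       sleep=time.sleep, clock=time.monotonic):
    # Poll the pod phase every `interval` seconds until it is terminal,
    # giving up after `timeout` seconds (5m0s in the log above).
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```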
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":939,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:54:17.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8785 STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:54:33.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8785" for this suite. 
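The ResourceQuota test above checks the bookkeeping cycle: creating a ConfigMap raises the quota's `used` count, and deleting it releases the usage. A toy model of that accounting (a sketch under simplified assumptions; real quota is enforced by the apiserver admission path and reconciled by the quota controller):

```python
class ToyResourceQuota:
    # Hypothetical model of quota status: `hard` limits vs `used` counts.
    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {name: 0 for name in hard}

    def admit(self, resource):
        # Reject the create if it would push usage past the hard limit.
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def release(self, resource):
        # Deleting the object releases its usage.
        self.used[resource] -= 1
```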
• [SLOW TEST:16.171 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":58,"skipped":945,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:54:33.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1494 STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 11 18:54:34.040: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fa006c59-54f6-4e51-9159-1374af60e04c", Controller:(*bool)(0xc0017cdb0a), BlockOwnerDeletion:(*bool)(0xc0017cdb0b)}} Mar 11 18:54:34.043: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"24eea6c2-4e60-4d08-abb6-c2383862767b", Controller:(*bool)(0xc00059a48a), 
BlockOwnerDeletion:(*bool)(0xc00059a48b)}} Mar 11 18:54:34.046: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"539572d5-e9a0-4fda-a0f2-3f82f36fdd93", Controller:(*bool)(0xc00059af6a), BlockOwnerDeletion:(*bool)(0xc00059af6b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:54:39.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1494" for this suite. • [SLOW TEST:5.182 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":59,"skipped":959,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:54:39.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2883 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer 
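The three OwnerReferences printed above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2); the test asserts the garbage collector still makes progress. A toy cycle check over such an ownership map (a sketch, not the GC's actual graph code):

```python
def has_owner_cycle(owner_of):
    # owner_of maps an object's name to its owner's name, mirroring the
    # OwnerReferences in the log: pod1 -> pod3 -> pod2 -> pod1.
    for start in owner_of:
        seen = set()
        node = start
        while node in owner_of:
            if node in seen:
                return True          # walked back onto the chain: a cycle
            seen.add(node)
            node = owner_of[node]    # follow the ownership edge
    return False
```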
[NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 11 18:54:39.185: INFO: PodSpec: initContainers in spec.initContainers Mar 11 18:55:34.870: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-31d420bb-18b2-4740-ba20-d931241c69e6", GenerateName:"", Namespace:"init-container-2883", SelfLink:"/api/v1/namespaces/init-container-2883/pods/pod-init-31d420bb-18b2-4740-ba20-d931241c69e6", UID:"93098e69-7d9c-4e60-b252-e67b58473ff8", ResourceVersion:"22762", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751085679, loc:(*time.Location)(0x7b4c620)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"185743055"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.75\"\n ],\n \"mac\": \"be:87:3c:77:13:a3\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.75\"\n ],\n \"mac\": \"be:87:3c:77:13:a3\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c94200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c94280)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc002c942e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c94320)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c94360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c94380)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lflz6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001044040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lflz6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lflz6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, 
scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lflz6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0035a80c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c4c070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035a8150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0035a8180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0035a8188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0035a818c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085679, loc:(*time.Location)(0x7b4c620)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085679, loc:(*time.Location)(0x7b4c620)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085679, loc:(*time.Location)(0x7b4c620)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085679, loc:(*time.Location)(0x7b4c620)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.75", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.75"}}, StartTime:(*v1.Time)(0xc002c943a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c4c1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c4c2a0)}, Ready:false, RestartCount:3, 
Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://f097f63294595bc834b4928bea21a68fda988ce91f5aee47fdef44c5a22a8711", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c94420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c943e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0035a823f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:55:34.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2883" for this suite. 
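The pod dump above shows why the test passes: `init1` (`/bin/false`) keeps terminating with RestartCount climbing, `init2` and `run1` stay Waiting. Init containers run one at a time and app containers start only after every init container succeeds. A simplified sketch of that ordering (names and the boolean `run` hook are assumptions):

```python
def start_pod_containers(init_containers, app_containers, run):
    # Init containers run sequentially; a failure stops the sequence, so
    # with restartPolicy Always the kubelet retries the failed init
    # container and the app containers never start (run1 stays Waiting
    # in the log above). `run(command)` returns True on success.
    started = []
    for name, command in init_containers:
        if not run(command):
            return started, False   # blocked: later inits and apps don't run
        started.append(name)
    for name, _ in app_containers:
        started.append(name)        # all inits succeeded; apps may start
    return started, True
```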
• [SLOW TEST:55.816 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":60,"skipped":966,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:55:34.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8185 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 11 18:55:35.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab" in namespace "downward-api-8185" to be "Succeeded or Failed" Mar 11 18:55:35.017: INFO: Pod 
"downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485365ms Mar 11 18:55:37.022: INFO: Pod "downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007453184s Mar 11 18:55:39.025: INFO: Pod "downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010844431s STEP: Saw pod success Mar 11 18:55:39.025: INFO: Pod "downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab" satisfied condition "Succeeded or Failed" Mar 11 18:55:39.028: INFO: Trying to get logs from node node2 pod downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab container client-container: STEP: delete the pod Mar 11 18:55:39.042: INFO: Waiting for pod downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab to disappear Mar 11 18:55:39.043: INFO: Pod downwardapi-volume-1f2e5171-f573-481b-827c-6fbd39f219ab no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:55:39.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8185" for this suite. 
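The downward-API test above mounts a volume file whose content is the container's memory limit via a `resourceFieldRef`. As a rough sketch of what ends up in that file (simplified assumption: the limit divided by the ref's `divisor`, rounded up, as a decimal string; the real implementation goes through `resource.Quantity`):

```python
def downward_memory_value(limit_bytes, divisor=1):
    # Contents of the projected file for resourceFieldRef limits.memory:
    # ceil(limit / divisor) rendered as a decimal string. divisor
    # defaults to 1, so by default the raw byte count is written.
    return str(-(-limit_bytes // divisor))  # ceiling division
```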
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":969,"failed":0} ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:55:39.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8542 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-78dd2085-5fd1-4ece-83ec-d46c97a36178 in namespace container-probe-8542 Mar 11 18:55:43.197: INFO: Started pod busybox-78dd2085-5fd1-4ece-83ec-d46c97a36178 in namespace container-probe-8542 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 18:55:43.199: INFO: Initial restart count of pod busybox-78dd2085-5fd1-4ece-83ec-d46c97a36178 is 0 Mar 11 18:56:37.312: INFO: Restart count of pod container-probe-8542/busybox-78dd2085-5fd1-4ece-83ec-d46c97a36178 is now 1 (54.112690896s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
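The restart observed above (`Restart count ... is now 1`) is driven by the liveness probe: once `cat /tmp/health` starts failing, the kubelet restarts the container after `failureThreshold` consecutive failures (3 by default). A toy model of that counting (a sketch, not kubelet code):

```python
def restarts_from_probe(results, failure_threshold=3):
    # Count restarts a liveness probe would trigger: after
    # failure_threshold consecutive failures the kubelet kills and
    # restarts the container, and the failure streak starts over.
    restarts = consecutive = 0
    for ok in results:
        if ok:
            consecutive = 0
        else:
            consecutive += 1
            if consecutive == failure_threshold:
                restarts += 1
                consecutive = 0
    return restarts
```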
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:56:37.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8542" for this suite. • [SLOW TEST:58.277 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":969,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:56:37.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-3880 STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 11 18:56:53.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3880" for this suite. • [SLOW TEST:16.216 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":63,"skipped":971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 11 18:56:53.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-868 STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 11 18:56:53.681: INFO: (0) /api/v1/nodes/node2/proxy/logs/:
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4791
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Mar 11 18:56:53.888: INFO: Waiting up to 5m0s for pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59" in namespace "var-expansion-4791" to be "Succeeded or Failed"
Mar 11 18:56:53.890: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070672ms
Mar 11 18:56:55.894: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005937426s
Mar 11 18:56:57.897: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008977507s
Mar 11 18:56:59.900: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012468045s
Mar 11 18:57:01.904: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015947102s
STEP: Saw pod success
Mar 11 18:57:01.904: INFO: Pod "var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59" satisfied condition "Succeeded or Failed"
Mar 11 18:57:01.906: INFO: Trying to get logs from node node1 pod var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59 container dapi-container: 
STEP: delete the pod
Mar 11 18:57:01.927: INFO: Waiting for pod var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59 to disappear
Mar 11 18:57:01.931: INFO: Pod var-expansion-f3169d58-cfa6-4449-af52-299e42bd8d59 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:01.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4791" for this suite.

• [SLOW TEST:8.183 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":999,"failed":0}
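The passing Variable Expansion test exercises `$(VAR)` substitution, where one env var's value is composed from others. A simplified sketch of that expansion (assumption: unknown references are left verbatim; the real expansion also handles `$$` escaping):

```python
import re

def expand_env(value, env):
    # Replace $(NAME) references from `env`; leave unresolvable
    # references untouched, as Kubernetes env composition does.
    def substitute(match):
        return env.get(match.group(1), match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", substitute, value)
```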
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:01.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5919
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-6e1c14d4-3cd7-48e6-8e42-f135050a3a85
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:02.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5919" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":66,"skipped":1008,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:02.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-789
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-b09de567-9b7c-44b4-8065-3b4c6abe6f14
STEP: Creating a pod to test consume secrets
Mar 11 18:57:02.208: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b" in namespace "projected-789" to be "Succeeded or Failed"
Mar 11 18:57:02.211: INFO: Pod "pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.323209ms
Mar 11 18:57:04.215: INFO: Pod "pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00738891s
Mar 11 18:57:06.220: INFO: Pod "pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012568228s
STEP: Saw pod success
Mar 11 18:57:06.220: INFO: Pod "pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b" satisfied condition "Succeeded or Failed"
Mar 11 18:57:06.223: INFO: Trying to get logs from node node1 pod pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b container projected-secret-volume-test: 
STEP: delete the pod
Mar 11 18:57:06.237: INFO: Waiting for pod pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b to disappear
Mar 11 18:57:06.240: INFO: Pod pod-projected-secrets-2aa49a73-eb89-4a3d-8364-799ed2b8fa1b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:06.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-789" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1016,"failed":0}
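`defaultMode` on a projected secret volume sets the permission bits of every mounted key file. A minimal local sketch of the 0400-style mode such tests typically assert, using a throwaway temp file rather than the test's actual mount path:

```shell
# Restrict a file the way defaultMode: 0400 would restrict a mounted secret key
f=$(mktemp)
chmod 0400 "$f"
stat -c '%a' "$f"   # prints 400 on GNU coreutils
rm -f "$f"
```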
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:06.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9280
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod

STEP: Reading file content from the nginx-container
Mar 11 18:57:12.399: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9280 PodName:pod-sharedvolume-3cabdca5-5dba-4185-be25-e1eacc47f7bc ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 18:57:12.399: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 18:57:12.521: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:12.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9280" for this suite.

• [SLOW TEST:6.282 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":68,"skipped":1057,"failed":0}
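The test mounts a single emptyDir volume into two containers, writes a file from one, and reads it back from the other (the `cat /usr/share/volumeshare/shareddata.txt` exec above). A local sketch of the same write-then-read handshake through a shared directory, with hypothetical file content:

```shell
# Simulate two containers sharing an emptyDir: one writer, one reader
share=$(mktemp -d)                                # stands in for the emptyDir mount
echo "Hello world from the writer" > "$share/shareddata.txt"
cat "$share/shareddata.txt"                       # the reader side of the exchange
rm -rf "$share"
```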
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:12.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7642
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:57:12.654: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:18.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7642" for this suite.

• [SLOW TEST:6.148 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":69,"skipped":1067,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:18.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-4191
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:22.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4191" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1071,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:22.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8853
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 11 18:57:29.493: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7b2a1b41-e1f1-4875-a896-40fb40208907"
Mar 11 18:57:29.493: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7b2a1b41-e1f1-4875-a896-40fb40208907" in namespace "pods-8853" to be "terminated due to deadline exceeded"
Mar 11 18:57:29.495: INFO: Pod "pod-update-activedeadlineseconds-7b2a1b41-e1f1-4875-a896-40fb40208907": Phase="Running", Reason="", readiness=true. Elapsed: 2.215937ms
Mar 11 18:57:31.500: INFO: Pod "pod-update-activedeadlineseconds-7b2a1b41-e1f1-4875-a896-40fb40208907": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00774194s
Mar 11 18:57:31.500: INFO: Pod "pod-update-activedeadlineseconds-7b2a1b41-e1f1-4875-a896-40fb40208907" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:31.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8853" for this suite.

• [SLOW TEST:8.672 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1078,"failed":0}
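The pod is updated with a short `activeDeadlineSeconds` and, about two seconds later, moves from Running to Failed with reason DeadlineExceeded. As a loose shell analogy of enforcing a wall-clock budget on a workload (using coreutils `timeout`, which is not part of the suite):

```shell
# Rough local analogy of activeDeadlineSeconds: kill a workload after a wall-clock budget
rc=0
timeout 1s sleep 5 || rc=$?
echo "exit=$rc"   # timeout(1) exits 124 when the deadline fires, akin to DeadlineExceeded
```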
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:31.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4055
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4055 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4055;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4055 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4055;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4055.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4055.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4055.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4055.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4055.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4055.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4055.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4055.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4055.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.27_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4055 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4055;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4055 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4055;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4055.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4055.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4055.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4055.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4055.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4055.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4055.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4055.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4055.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4055.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 27.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.27_udp@PTR;check="$$(dig +tcp +noall +answer +search 27.62.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.62.27_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 18:57:35.666: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.670: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.672: INFO: Unable to read wheezy_udp@dns-test-service.dns-4055 from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.675: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4055 from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.683: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.708: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.710: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.714: INFO: Unable to read jessie_udp@dns-test-service.dns-4055 from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.717: INFO: Unable to read jessie_tcp@dns-test-service.dns-4055 from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.721: INFO: Unable to read jessie_udp@dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.724: INFO: Unable to read jessie_tcp@dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.727: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4055.svc from pod dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695: the server could not find the requested resource (get pods dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695)
Mar 11 18:57:35.747: INFO: Lookups using dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4055 wheezy_tcp@dns-test-service.dns-4055 wheezy_udp@dns-test-service.dns-4055.svc wheezy_tcp@dns-test-service.dns-4055.svc wheezy_udp@_http._tcp.dns-test-service.dns-4055.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4055.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4055 jessie_tcp@dns-test-service.dns-4055 jessie_udp@dns-test-service.dns-4055.svc jessie_tcp@dns-test-service.dns-4055.svc jessie_udp@_http._tcp.dns-test-service.dns-4055.svc jessie_tcp@_http._tcp.dns-test-service.dns-4055.svc]

Mar 11 18:57:40.831: INFO: DNS probes using dns-4055/dns-test-4ccf7abb-81e6-4510-80c0-3c6eba884695 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:40.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4055" for this suite.

• [SLOW TEST:9.356 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":72,"skipped":1093,"failed":0}
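Two name constructions in the probe scripts above are easy to miss: the pod A record is the pod's IP with dots replaced by dashes under `<namespace>.pod.cluster.local`, and the PTR query reverses the IP's octets under `in-addr.arpa.`. A standalone sketch of both, reusing the awk idiom from the log with 10.233.62.27 (the IP queried in the PTR probes) as a sample:

```shell
ip="10.233.62.27"
# Pod A record name: dash-separated octets, as built in the wheezy/jessie probe scripts
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4055.pod.cluster.local"}')
echo "$podARec"    # 10-233-62-27.dns-4055.pod.cluster.local
# Reverse PTR name: octets reversed under in-addr.arpa., as in the PTR dig queries
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"        # 27.62.233.10.in-addr.arpa.
```

In the log itself the A-record construction uses `hostname -i` inside the probe pod, so the octets come from the pod's own IP rather than a fixed value.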
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:40.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-582
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-582
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-582
I0311 18:57:41.006911      12 runners.go:190] Created replication controller with name: externalname-service, namespace: services-582, replica count: 2
I0311 18:57:44.061449      12 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 11 18:57:44.061: INFO: Creating new exec pod
Mar 11 18:57:49.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-582 execpodgcfjq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar 11 18:57:49.421: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar 11 18:57:49.421: INFO: stdout: ""
Mar 11 18:57:49.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-582 execpodgcfjq -- /bin/sh -x -c nc -zv -t -w 2 10.233.22.74 80'
Mar 11 18:57:49.669: INFO: stderr: "+ nc -zv -t -w 2 10.233.22.74 80\nConnection to 10.233.22.74 80 port [tcp/http] succeeded!\n"
Mar 11 18:57:49.669: INFO: stdout: ""
Mar 11 18:57:49.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-582 execpodgcfjq -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 32755'
Mar 11 18:57:49.933: INFO: stderr: "+ nc -zv -t -w 2 10.10.190.207 32755\nConnection to 10.10.190.207 32755 port [tcp/32755] succeeded!\n"
Mar 11 18:57:49.933: INFO: stdout: ""
Mar 11 18:57:49.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-582 execpodgcfjq -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.208 32755'
Mar 11 18:57:50.196: INFO: stderr: "+ nc -zv -t -w 2 10.10.190.208 32755\nConnection to 10.10.190.208 32755 port [tcp/32755] succeeded!\n"
Mar 11 18:57:50.196: INFO: stdout: ""
Mar 11 18:57:50.196: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:57:50.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-582" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.350 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":73,"skipped":1098,"failed":0}
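The connectivity sweep above runs the same `nc -zv -t -w 2` check against four targets: the service DNS name, the ClusterIP (10.233.22.74:80), and the NodePort (32755) on each node IP. A sketch that only builds that probe list as the exec pod would run it (endpoints copied from this log; no cluster is contacted here):

```shell
# Build the nc probe commands for each reachable endpoint of the NodePort service
svc="externalname-service"; cluster_ip="10.233.22.74"; port=80
node_port=32755; nodes="10.10.190.207 10.10.190.208"
for target in "$svc $port" "$cluster_ip $port"; do
  echo "nc -zv -t -w 2 $target"          # in-cluster name and ClusterIP probes
done
for n in $nodes; do
  echo "nc -zv -t -w 2 $n $node_port"    # NodePort probe on each node
done
```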
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:57:50.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6126
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:01.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6126" for this suite.

• [SLOW TEST:11.158 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":74,"skipped":1108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:01.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-9640
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0311 18:58:02.528649      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 18:58:02.528: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:02.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9640" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":75,"skipped":1156,"failed":0}
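The orphan behavior being verified corresponds to a DELETE request whose body carries `propagationPolicy: Orphan`; a sketch of that DeleteOptions body:

```yaml
# DeleteOptions body sent with the DELETE request for the Deployment.
# With Orphan propagation the Deployment is removed but its ReplicaSet is
# left behind with its ownerReferences cleared - the garbage collector must
# NOT delete it, which is what the "wait ... to see if the garbage collector
# mistakenly deletes the rs" step checks.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```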
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:02.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1110
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 18:58:02.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 18:58:04.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085882, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085882, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085882, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751085882, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 18:58:07.917: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:07.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1110" for this suite.
STEP: Destroying namespace "webhook-1110-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.417 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":76,"skipped":1186,"failed":0}
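The discovery checks above amount to asserting that a document of roughly this shape is served at `/apis/admissionregistration.k8s.io/v1` (abbreviated sketch; real responses carry more fields, e.g. verbs and shortNames):

```yaml
# Trimmed APIResourceList as served by the apiserver's discovery endpoint.
kind: APIResourceList
apiVersion: v1
groupVersion: admissionregistration.k8s.io/v1
resources:
- name: mutatingwebhookconfigurations
  kind: MutatingWebhookConfiguration
  namespaced: false
- name: validatingwebhookconfigurations
  kind: ValidatingWebhookConfiguration
  namespaced: false
```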
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:07.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4372
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 11 18:58:08.096: INFO: Waiting up to 5m0s for pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408" in namespace "downward-api-4372" to be "Succeeded or Failed"
Mar 11 18:58:08.098: INFO: Pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408": Phase="Pending", Reason="", readiness=false. Elapsed: 1.938696ms
Mar 11 18:58:10.104: INFO: Pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008082243s
Mar 11 18:58:12.107: INFO: Pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408": Phase="Running", Reason="", readiness=true. Elapsed: 4.011364529s
Mar 11 18:58:14.110: INFO: Pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014213266s
STEP: Saw pod success
Mar 11 18:58:14.110: INFO: Pod "downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408" satisfied condition "Succeeded or Failed"
Mar 11 18:58:14.112: INFO: Trying to get logs from node node2 pod downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408 container dapi-container: 
STEP: delete the pod
Mar 11 18:58:14.124: INFO: Waiting for pod downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408 to disappear
Mar 11 18:58:14.126: INFO: Pod downward-api-0b5dff93-5f3f-48e2-a2e8-dc4922a62408 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:14.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4372" for this suite.

• [SLOW TEST:6.180 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1191,"failed":0}
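The pod this test creates injects its own UID through the downward API's `fieldRef`; a minimal equivalent manifest (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # prints env vars so the test can read logs
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # downward API source for the pod's UID
```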
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:14.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7186
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 11 18:58:14.267: INFO: Waiting up to 5m0s for pod "pod-a6880015-6357-41e2-87ed-f56bfb0bce67" in namespace "emptydir-7186" to be "Succeeded or Failed"
Mar 11 18:58:14.269: INFO: Pod "pod-a6880015-6357-41e2-87ed-f56bfb0bce67": Phase="Pending", Reason="", readiness=false. Elapsed: 1.826518ms
Mar 11 18:58:16.273: INFO: Pod "pod-a6880015-6357-41e2-87ed-f56bfb0bce67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005843312s
Mar 11 18:58:18.276: INFO: Pod "pod-a6880015-6357-41e2-87ed-f56bfb0bce67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008689694s
STEP: Saw pod success
Mar 11 18:58:18.276: INFO: Pod "pod-a6880015-6357-41e2-87ed-f56bfb0bce67" satisfied condition "Succeeded or Failed"
Mar 11 18:58:18.278: INFO: Trying to get logs from node node2 pod pod-a6880015-6357-41e2-87ed-f56bfb0bce67 container test-container: 
STEP: delete the pod
Mar 11 18:58:18.291: INFO: Waiting for pod pod-a6880015-6357-41e2-87ed-f56bfb0bce67 to disappear
Mar 11 18:58:18.293: INFO: Pod pod-a6880015-6357-41e2-87ed-f56bfb0bce67 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:18.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7186" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1196,"failed":0}
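The `(non-root,0777,default)` variant runs a pod shaped roughly like this: an emptyDir on the default medium, written by a non-root user with 0777 permissions (image, UID, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root, per the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0777 /mnt/test/f && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium (node-local disk)
```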
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:18.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-1593
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Mar 11 18:58:18.437: INFO: Waiting up to 5m0s for pod "client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9" in namespace "containers-1593" to be "Succeeded or Failed"
Mar 11 18:58:18.439: INFO: Pod "client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.959609ms
Mar 11 18:58:20.444: INFO: Pod "client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007535606s
Mar 11 18:58:22.450: INFO: Pod "client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012729259s
STEP: Saw pod success
Mar 11 18:58:22.450: INFO: Pod "client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9" satisfied condition "Succeeded or Failed"
Mar 11 18:58:22.452: INFO: Trying to get logs from node node1 pod client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9 container test-container: 
STEP: delete the pod
Mar 11 18:58:22.468: INFO: Waiting for pod client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9 to disappear
Mar 11 18:58:22.470: INFO: Pod client-containers-e29c9a0c-39a7-4656-a2c9-95a40a6a6cf9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:22.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1593" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1198,"failed":0}
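In pod-spec terms, "overriding the image's default command (docker entrypoint)" means setting the container's `command` field, which replaces the image ENTRYPOINT (minimal illustrative manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo", "overridden"]  # replaces the image ENTRYPOINT
```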
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:22.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4674
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-fa6a9e4a-3a01-4650-be21-15711c02c728
STEP: Creating a pod to test consume secrets
Mar 11 18:58:22.616: INFO: Waiting up to 5m0s for pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a" in namespace "secrets-4674" to be "Succeeded or Failed"
Mar 11 18:58:22.619: INFO: Pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984974ms
Mar 11 18:58:24.624: INFO: Pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007622611s
Mar 11 18:58:26.627: INFO: Pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010936426s
Mar 11 18:58:28.630: INFO: Pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014214379s
STEP: Saw pod success
Mar 11 18:58:28.630: INFO: Pod "pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a" satisfied condition "Succeeded or Failed"
Mar 11 18:58:28.633: INFO: Trying to get logs from node node1 pod pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a container secret-env-test: 
STEP: delete the pod
Mar 11 18:58:28.647: INFO: Waiting for pod pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a to disappear
Mar 11 18:58:28.649: INFO: Pod pod-secrets-ca4bf042-d6c2-4bd5-b587-c0d912a7258a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:28.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4674" for this suite.

• [SLOW TEST:6.180 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1210,"failed":0}
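Consuming a Secret as an environment variable, as this test does, uses `env[].valueFrom.secretKeyRef`; an illustrative pair of manifests (names and values are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints env so the value can be checked in logs
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```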
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:28.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6123
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Mar 11 18:58:28.791: INFO: Waiting up to 5m0s for pod "client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178" in namespace "containers-6123" to be "Succeeded or Failed"
Mar 11 18:58:28.794: INFO: Pod "client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17274ms
Mar 11 18:58:30.799: INFO: Pod "client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007051697s
Mar 11 18:58:32.802: INFO: Pod "client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010670065s
STEP: Saw pod success
Mar 11 18:58:32.802: INFO: Pod "client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178" satisfied condition "Succeeded or Failed"
Mar 11 18:58:32.805: INFO: Trying to get logs from node node2 pod client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178 container test-container: 
STEP: delete the pod
Mar 11 18:58:32.818: INFO: Waiting for pod client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178 to disappear
Mar 11 18:58:32.822: INFO: Pod client-containers-ff03557b-6880-4132-92ed-c57ed4a1f178 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:32.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6123" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1224,"failed":0}
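"Override all" here means setting both `command` and `args`, which replace the image's ENTRYPOINT and CMD respectively (illustrative sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-all-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]              # replaces the image ENTRYPOINT
    args: ["hello", "world"]       # replaces the image CMD
```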
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:32.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-933
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:58:32.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar 11 18:58:40.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 create -f -'
Mar 11 18:58:41.241: INFO: stderr: ""
Mar 11 18:58:41.241: INFO: stdout: "e2e-test-crd-publish-openapi-2657-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 11 18:58:41.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 delete e2e-test-crd-publish-openapi-2657-crds test-foo'
Mar 11 18:58:41.392: INFO: stderr: ""
Mar 11 18:58:41.392: INFO: stdout: "e2e-test-crd-publish-openapi-2657-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar 11 18:58:41.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 apply -f -'
Mar 11 18:58:41.637: INFO: stderr: ""
Mar 11 18:58:41.637: INFO: stdout: "e2e-test-crd-publish-openapi-2657-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 11 18:58:41.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 delete e2e-test-crd-publish-openapi-2657-crds test-foo'
Mar 11 18:58:41.793: INFO: stderr: ""
Mar 11 18:58:41.793: INFO: stdout: "e2e-test-crd-publish-openapi-2657-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar 11 18:58:41.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 create -f -'
Mar 11 18:58:42.001: INFO: rc: 1
Mar 11 18:58:42.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 apply -f -'
Mar 11 18:58:42.199: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar 11 18:58:42.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 create -f -'
Mar 11 18:58:42.413: INFO: rc: 1
Mar 11 18:58:42.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-933 apply -f -'
Mar 11 18:58:42.625: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Mar 11 18:58:42.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2657-crds'
Mar 11 18:58:42.877: INFO: stderr: ""
Mar 11 18:58:42.877: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2657-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Mar 11 18:58:42.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2657-crds.metadata'
Mar 11 18:58:43.110: INFO: stderr: ""
Mar 11 18:58:43.111: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2657-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. 
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within which each name must be unique. An\n     empty namespace is equivalent to the \"default\" namespace, but \"default\" is\n     the canonical representation. Not all objects are required to be scoped to\n     a namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects that this object depends on. If ALL objects in the list\n     have been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. 
May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     pass them unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
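The deletionTimestamp and finalizers semantics spelled out in the explain output above reduce to a small state machine: deletion sets the timestamp once, and the object actually disappears only when the finalizers list empties. A minimal in-memory sketch of that behavior (purely illustrative; this is not the apiserver's actual implementation):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectMeta:
    name: str
    finalizers: list = field(default_factory=list)
    deletion_timestamp: Optional[str] = None  # set once by delete, never unset

class Store:
    """Toy registry: a deleted object stays visible while its
    finalizers list is non-empty, as described above."""
    def __init__(self):
        self.objects = {}

    def create(self, meta: ObjectMeta):
        self.objects[meta.name] = meta

    def delete(self, name: str, now: str):
        meta = self.objects[name]
        if meta.deletion_timestamp is None:
            meta.deletion_timestamp = now  # may not be unset afterwards
        if not meta.finalizers:
            del self.objects[name]  # gone from lists, unreachable by name

    def remove_finalizer(self, name: str, fin: str):
        meta = self.objects[name]
        # While deleting, entries may only be removed, never added.
        meta.finalizers.remove(fin)
        if meta.deletion_timestamp is not None and not meta.finalizers:
            del self.objects[name]

store = Store()
store.create(ObjectMeta("pod-a", finalizers=["example.com/cleanup"]))
store.delete("pod-a", now="T0")  # blocked: finalizer still present
print("pod-a" in store.objects)  # True
store.remove_finalizer("pod-a", "example.com/cleanup")
print("pod-a" in store.objects)  # False
```

The names ("pod-a", "example.com/cleanup") are illustrative; the point is only the ordering guarantee: deletion is requested once, then gated on the finalizers list.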
Mar 11 18:58:43.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2657-crds.spec'
Mar 11 18:58:43.364: INFO: stderr: ""
Mar 11 18:58:43.364: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2657-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 11 18:58:43.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2657-crds.spec.bars'
Mar 11 18:58:43.622: INFO: stderr: ""
Mar 11 18:58:43.622: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2657-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 11 18:58:43.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2657-crds.spec.bars2'
Mar 11 18:58:43.846: INFO: rc: 1
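The `rc: 1` above is the whole assertion for this step: `kubectl explain` exits nonzero when asked about a property (`spec.bars2`) that is not in the published schema. The harness's check boils down to running the command and inspecting the return code; a sketch of that pattern using a stand-in failing command, since reproducing it for real needs a cluster:

```python
import subprocess
import sys

# Stand-in for `kubectl explain <crd>.spec.bars2` on a nonexistent
# field: any command exiting nonzero demonstrates the rc check.
proc = subprocess.run(
    [sys.executable, "-c", "raise SystemExit(1)"],
    capture_output=True, text=True,
)
print("rc:", proc.returncode)  # rc: 1, mirroring the log line above
assert proc.returncode != 0
```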
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:46.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-933" for this suite.

• [SLOW TEST:13.938 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":82,"skipped":1249,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:46.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5114
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:58:46.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5114'
Mar 11 18:58:47.186: INFO: stderr: ""
Mar 11 18:58:47.187: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Mar 11 18:58:47.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5114'
Mar 11 18:58:47.407: INFO: stderr: ""
Mar 11 18:58:47.407: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 11 18:58:48.412: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 18:58:48.412: INFO: Found 0 / 1
Mar 11 18:58:49.412: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 18:58:49.412: INFO: Found 0 / 1
Mar 11 18:58:50.410: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 18:58:50.410: INFO: Found 1 / 1
Mar 11 18:58:50.410: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 11 18:58:50.412: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 18:58:50.412: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 11 18:58:50.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-vzgnf --namespace=kubectl-5114'
Mar 11 18:58:50.590: INFO: stderr: ""
Mar 11 18:58:50.590: INFO: stdout: "Name:         agnhost-master-vzgnf\nNamespace:    kubectl-5114\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Thu, 11 Mar 2021 18:58:47 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.87\"\n                    ],\n                    \"mac\": \"36:3d:7e:11:c0:b0\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.87\"\n                    ],\n                    \"mac\": \"36:3d:7e:11:c0:b0\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.87\nIPs:\n  IP:           10.244.3.87\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://39322ed5cbd7edf18a696fdf64754d42593a3e5c17b71bdf586b34f9ed078dd0\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 11 Mar 2021 18:58:49 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f8gx2 (ro)\nConditions:\n  Type              Status\n  Initialized    
   True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-f8gx2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-f8gx2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason          Age   From               Message\n  ----    ------          ----  ----               -------\n  Normal  Scheduled       3s    default-scheduler  Successfully assigned kubectl-5114/agnhost-master-vzgnf to node1\n  Normal  AddedInterface  2s    multus             Add eth0 [10.244.3.87/24]\n  Normal  Pulling         2s    kubelet            Pulling image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\"\n  Normal  Pulled          1s    kubelet            Successfully pulled image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\"\n  Normal  Created         1s    kubelet            Created container agnhost-master\n  Normal  Started         1s    kubelet            Started container agnhost-master\n"
Mar 11 18:58:50.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5114'
Mar 11 18:58:50.777: INFO: stderr: ""
Mar 11 18:58:50.777: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5114\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-vzgnf\n"
Mar 11 18:58:50.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5114'
Mar 11 18:58:50.935: INFO: stderr: ""
Mar 11 18:58:50.936: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5114\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.233.25.64\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.3.87:6379\nSession Affinity:  None\nEvents:            \n"
Mar 11 18:58:50.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node master1'
Mar 11 18:58:51.150: INFO: stderr: ""
Mar 11 18:58:51.150: INFO: stdout: "Name:               master1\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=master1\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"0e:0e:ac:80:fe:e5\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 10.10.190.202\n                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 11 Mar 2021 17:50:16 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  master1\n  AcquireTime:     \n  RenewTime:       Thu, 11 Mar 2021 18:58:43 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 11 Mar 2021 17:54:32 +0000   Thu, 11 Mar 2021 17:54:32 +0000   FlannelIsUp                  Flannel is running on this node\n  MemoryPressure       False   Thu, 11 Mar 2021 18:58:44 +0000   Thu, 11 Mar 2021 17:50:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 11 Mar 2021 18:58:44 +0000   Thu, 11 Mar 2021 17:50:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 11 Mar 2021 18:58:44 
+0000   Thu, 11 Mar 2021 17:50:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 11 Mar 2021 18:58:44 +0000   Thu, 11 Mar 2021 17:52:41 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  10.10.190.202\n  Hostname:    master1\nCapacity:\n  cpu:                80\n  ephemeral-storage:  439913340Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             196518336Ki\n  pods:               110\nAllocatable:\n  cpu:                79550m\n  ephemeral-storage:  405424133473\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             195665936Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 0cb21bb9b8b64bf38523b2f5a8bdad14\n  System UUID:                00ACFB60-0631-E711-906E-0017A4403562\n  Boot ID:                    4a77cc46-4c80-409c-8c40-c24648f76e32\n  Kernel Version:             3.10.0-1160.15.2.el7.x86_64\n  OS Image:                   CentOS Linux 7 (Core)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://19.3.12\n  Kubelet Version:            v1.18.8\n  Kube-Proxy Version:         v1.18.8\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-59dcc4799b-cp4vq                            100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     65m\n  kube-system                 docker-registry-docker-registry-6d4484d8f9-pkjwp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63m\n  kube-system                 
kube-apiserver-master1                              250m (0%)     0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 kube-controller-manager-master1                     200m (0%)     0 (0%)      0 (0%)           0 (0%)         67m\n  kube-system                 kube-flannel-pzw7v                                  150m (0%)     300m (0%)   64M (0%)         500M (0%)      66m\n  kube-system                 kube-multus-ds-amd64-2jdtx                          100m (0%)     100m (0%)   90Mi (0%)        90Mi (0%)      66m\n  kube-system                 kube-proxy-bwz9p                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67m\n  kube-system                 kube-scheduler-master1                              100m (0%)     0 (0%)      0 (0%)           0 (0%)         51m\n  monitoring                  node-exporter-b54mc                                 112m (0%)     270m (0%)   200Mi (0%)       220Mi (0%)     54m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests       Limits\n  --------           --------       ------\n  cpu                1012m (1%)     670m (0%)\n  memory             431140Ki (0%)  1003316480 (0%)\n  ephemeral-storage  0 (0%)         0 (0%)\n  hugepages-1Gi      0 (0%)         0 (0%)\n  hugepages-2Mi      0 (0%)         0 (0%)\nEvents:\n  Type    Reason                   Age                From     Message\n  ----    ------                   ----               ----     -------\n  Normal  Starting                 60m                kubelet  Starting kubelet.\n  Normal  NodeHasSufficientMemory  60m (x8 over 60m)  kubelet  Node master1 status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    60m (x8 over 60m)  kubelet  Node master1 status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     60m (x7 over 60m)  kubelet  Node master1 status is now: NodeHasSufficientPID\n  Normal  
NodeAllocatableEnforced  60m                kubelet  Updated Node Allocatable limit across pods\n"
Mar 11 18:58:51.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5114'
Mar 11 18:58:51.313: INFO: stderr: ""
Mar 11 18:58:51.313: INFO: stdout: "Name:         kubectl-5114\nLabels:       e2e-framework=kubectl\n              e2e-run=37568227-9d5e-4de7-bc87-756a6f76894b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:51.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5114" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":83,"skipped":1257,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:51.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-6417
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:55.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6417" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1269,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:55.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-1117
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:58:55.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1117" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":85,"skipped":1279,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:58:55.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-3938
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:58:55.736: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:01.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3938" for this suite.

• [SLOW TEST:6.238 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":86,"skipped":1286,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:01.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-7840
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-7840/configmap-test-277ff2e2-0b64-45b3-a971-87c44026cd63
STEP: Creating a pod to test consume configMaps
Mar 11 18:59:01.990: INFO: Waiting up to 5m0s for pod "pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47" in namespace "configmap-7840" to be "Succeeded or Failed"
Mar 11 18:59:01.992: INFO: Pod "pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47": Phase="Pending", Reason="", readiness=false. Elapsed: 1.77828ms
Mar 11 18:59:03.995: INFO: Pod "pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005148005s
Mar 11 18:59:05.999: INFO: Pod "pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009531325s
STEP: Saw pod success
Mar 11 18:59:05.999: INFO: Pod "pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47" satisfied condition "Succeeded or Failed"
Mar 11 18:59:06.003: INFO: Trying to get logs from node node2 pod pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47 container env-test: 
STEP: delete the pod
Mar 11 18:59:06.017: INFO: Waiting for pod pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47 to disappear
Mar 11 18:59:06.020: INFO: Pod pod-configmaps-842d39cd-1f46-4582-8547-79a5a7533d47 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7840" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1306,"failed":0}
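The manifest behind a "consumable via the environment" test pairs a ConfigMap key with a container env var through `valueFrom.configMapKeyRef`. A rough sketch of that shape as a plain dict (the resource names here are illustrative, not the generated ones logged above):

```python
# Approximate shape of the pod spec such a test submits: one container
# whose env var DATA_1 is sourced from a ConfigMap key. The pod runs
# to completion, so the test can wait for "Succeeded or Failed".
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "env-test",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "DATA_1",
                "valueFrom": {
                    "configMapKeyRef": {
                        "name": "configmap-test-example",  # illustrative
                        "key": "data-1",
                    }
                },
            }],
        }],
    },
}
print(pod["spec"]["containers"][0]["env"][0]["name"])  # DATA_1
```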
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:06.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1559
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Mar 11 18:59:06.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 11 18:59:06.249: INFO: stderr: ""
Mar 11 18:59:06.249: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:06.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1559" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":88,"skipped":1330,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:06.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-1241
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 11 18:59:14.428: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:14.430: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:16.432: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:16.437: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:18.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:18.436: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:20.436: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:20.440: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:22.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:22.436: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:24.431: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:24.436: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:26.433: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:26.436: INFO: Pod pod-with-prestop-http-hook still exists
Mar 11 18:59:28.436: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 11 18:59:28.438: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:28.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1241" for this suite.

• [SLOW TEST:22.194 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1331,"failed":0}
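Editor's note: the `pod-with-prestop-http-hook` pod exercised above can be sketched as a manifest like the following. This is an illustration only — the actual spec is built in `test/e2e/common/lifecycle_hook.go`; the image, handler path, and port here are assumptions, not taken from the log.

```yaml
# Illustrative sketch of a pod with a preStop httpGet lifecycle hook.
# Image, path, and port are placeholders (assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # pod name taken from the log above
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2      # placeholder image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # hypothetical handler path
          port: 8080                 # hypothetical handler port
```

On deletion, the kubelet issues the HTTP GET to the handler before the container is terminated, which is what the "check prestop hook" step verifies.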
SSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:28.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-2220
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:59:28.588: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-86d6504d-6b59-42a5-9743-e99aeef2e01d" in namespace "security-context-test-2220" to be "Succeeded or Failed"
Mar 11 18:59:28.591: INFO: Pod "alpine-nnp-false-86d6504d-6b59-42a5-9743-e99aeef2e01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566789ms
Mar 11 18:59:30.594: INFO: Pod "alpine-nnp-false-86d6504d-6b59-42a5-9743-e99aeef2e01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005231671s
Mar 11 18:59:32.597: INFO: Pod "alpine-nnp-false-86d6504d-6b59-42a5-9743-e99aeef2e01d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009105894s
Mar 11 18:59:32.597: INFO: Pod "alpine-nnp-false-86d6504d-6b59-42a5-9743-e99aeef2e01d" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:32.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2220" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1335,"failed":0}
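Editor's note: the `alpine-nnp-false-*` pod above tests the `allowPrivilegeEscalation: false` container security context. A minimal sketch (image and command are illustrative, not from the log):

```yaml
# Illustrative sketch: container that must not gain privileges via setuid/exec.
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-example     # modeled on the pod name in the log
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.12               # placeholder image
    command: ["sh", "-c", "id"]      # illustrative command
    securityContext:
      allowPrivilegeEscalation: false   # the setting under test
```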

------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:32.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8483
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 18:59:32.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4" in namespace "projected-8483" to be "Succeeded or Failed"
Mar 11 18:59:32.749: INFO: Pod "downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434137ms
Mar 11 18:59:34.752: INFO: Pod "downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005957735s
Mar 11 18:59:36.755: INFO: Pod "downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009094351s
STEP: Saw pod success
Mar 11 18:59:36.755: INFO: Pod "downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4" satisfied condition "Succeeded or Failed"
Mar 11 18:59:36.758: INFO: Trying to get logs from node node1 pod downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4 container client-container: 
STEP: delete the pod
Mar 11 18:59:36.773: INFO: Waiting for pod downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4 to disappear
Mar 11 18:59:36.774: INFO: Pod downwardapi-volume-119a37d9-6df0-4a82-aa46-4ba99abfd3d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:36.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8483" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1335,"failed":0}
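Editor's note: "should set mode on item file" verifies the per-item `mode` field of a projected downward API volume. A sketch of such a spec (mount path, field, and mode value are assumptions for illustration):

```yaml
# Illustrative sketch: projected downwardAPI volume with an explicit item mode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: busybox:1.32              # placeholder image
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400               # per-item file mode under test
            fieldRef:
              fieldPath: metadata.name
```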
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:36.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7200
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-8180
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-2626
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7200" for this suite.
STEP: Destroying namespace "nsdeletetest-8180" for this suite.
Mar 11 18:59:43.175: INFO: Namespace nsdeletetest-8180 was already deleted
STEP: Destroying namespace "nsdeletetest-2626" for this suite.

• [SLOW TEST:6.396 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":92,"skipped":1380,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:43.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-8194
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:59:43.314: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5" in namespace "security-context-test-8194" to be "Succeeded or Failed"
Mar 11 18:59:43.316: INFO: Pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071081ms
Mar 11 18:59:45.323: INFO: Pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009587757s
Mar 11 18:59:47.328: INFO: Pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013844387s
Mar 11 18:59:49.331: INFO: Pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017093766s
Mar 11 18:59:49.331: INFO: Pod "busybox-readonly-false-768e48df-6b25-4619-8fa3-4d68086564b5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:49.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8194" for this suite.

• [SLOW TEST:6.159 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1405,"failed":0}
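Editor's note: the `busybox-readonly-false-*` pod checks that the root filesystem stays writable when `readOnlyRootFilesystem` is explicitly false. A sketch (image and command are illustrative):

```yaml
# Illustrative sketch: writable rootfs when readOnlyRootFilesystem=false.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-example   # modeled on the pod name in the log
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox:1.32                  # placeholder image
    command: ["sh", "-c", "touch /file && echo writable"]  # writes to the rootfs
    securityContext:
      readOnlyRootFilesystem: false      # the setting under test
```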
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:49.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4708
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 18:59:49.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:55.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4708" for this suite.

• [SLOW TEST:6.169 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1445,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:55.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-354
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 11 18:59:55.646: INFO: Waiting up to 5m0s for pod "pod-beb69160-2921-4851-aa86-de2bb5d8bbc9" in namespace "emptydir-354" to be "Succeeded or Failed"
Mar 11 18:59:55.648: INFO: Pod "pod-beb69160-2921-4851-aa86-de2bb5d8bbc9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.884751ms
Mar 11 18:59:57.651: INFO: Pod "pod-beb69160-2921-4851-aa86-de2bb5d8bbc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004727251s
Mar 11 18:59:59.656: INFO: Pod "pod-beb69160-2921-4851-aa86-de2bb5d8bbc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009832625s
STEP: Saw pod success
Mar 11 18:59:59.656: INFO: Pod "pod-beb69160-2921-4851-aa86-de2bb5d8bbc9" satisfied condition "Succeeded or Failed"
Mar 11 18:59:59.658: INFO: Trying to get logs from node node1 pod pod-beb69160-2921-4851-aa86-de2bb5d8bbc9 container test-container: 
STEP: delete the pod
Mar 11 18:59:59.854: INFO: Waiting for pod pod-beb69160-2921-4851-aa86-de2bb5d8bbc9 to disappear
Mar 11 18:59:59.856: INFO: Pod pod-beb69160-2921-4851-aa86-de2bb5d8bbc9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 18:59:59.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-354" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1497,"failed":0}
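Editor's note: "volume on default medium should have the correct mode" mounts an `emptyDir` with no medium set and inspects the mount point's permissions. A sketch (image, mount path, and command are assumptions):

```yaml
# Illustrative sketch: emptyDir on the default medium (node disk).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-medium-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container               # container name from the log
    image: busybox:1.32                # placeholder image
    command: ["sh", "-c", "ls -ld /test-volume"]  # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # no medium => node's default storage
```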
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 18:59:59.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-2353
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:04.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2353" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":96,"skipped":1508,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:04.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-5393
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:09.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5393" for this suite.

• [SLOW TEST:5.163 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":97,"skipped":1521,"failed":0}
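Editor's note: the adoption test creates a bare pod labeled `name: pod-adoption` and then a ReplicationController whose selector matches it, so the controller adopts the orphan instead of creating a new replica. A sketch (image is a placeholder):

```yaml
# Illustrative sketch: RC whose selector matches a pre-existing labeled pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption                   # name from the log's 'name' label
spec:
  replicas: 1
  selector:
    name: pod-adoption                 # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.2    # placeholder image
```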
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:09.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-2454
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 19:00:15.709: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local from pod dns-2454/dns-test-bce0149c-12d4-4783-80a1-306e97257a6b: Get https://10.10.190.202:6443/api/v1/namespaces/dns-2454/pods/dns-test-bce0149c-12d4-4783-80a1-306e97257a6b/proxy/results/wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local: stream error: stream ID 973; INTERNAL_ERROR
Mar 11 19:00:15.713: INFO: Lookups using dns-2454/dns-test-bce0149c-12d4-4783-80a1-306e97257a6b failed for: [wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local]

Mar 11 19:00:20.720: INFO: DNS probes using dns-test-bce0149c-12d4-4783-80a1-306e97257a6b succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 19:00:26.760: INFO: DNS probes using dns-test-35a2e0e1-3a7c-4964-9ceb-53cf0ed95243 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2454.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2454.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 19:00:32.812: INFO: DNS probes using dns-test-1d572069-32ac-42a8-88e3-34f1488462b5 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:32.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2454" for this suite.

• [SLOW TEST:23.284 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":98,"skipped":1549,"failed":0}
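Editor's note: the `dig ... CNAME` probes above query an ExternalName service, which DNS resolves to a CNAME record; the log then shows the test flipping `externalName` and finally converting the service to `type: ClusterIP` (at which point the probes switch to A-record lookups). A sketch of the initial service (the initial external name is an assumption; the log only shows it being changed to `bar.example.com`):

```yaml
# Illustrative sketch: ExternalName service resolved as a CNAME in-cluster.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3             # name inferred from the DNS queries above
  namespace: dns-2454                  # namespace from the log
spec:
  type: ExternalName
  externalName: foo.example.com        # assumed initial value; later changed
```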
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:32.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8031
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 11 19:00:32.973: INFO: Waiting up to 5m0s for pod "pod-3ded9a22-28df-4582-bbe2-e828738a5569" in namespace "emptydir-8031" to be "Succeeded or Failed"
Mar 11 19:00:32.976: INFO: Pod "pod-3ded9a22-28df-4582-bbe2-e828738a5569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.558049ms
Mar 11 19:00:34.980: INFO: Pod "pod-3ded9a22-28df-4582-bbe2-e828738a5569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006200953s
Mar 11 19:00:36.983: INFO: Pod "pod-3ded9a22-28df-4582-bbe2-e828738a5569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009614982s
STEP: Saw pod success
Mar 11 19:00:36.983: INFO: Pod "pod-3ded9a22-28df-4582-bbe2-e828738a5569" satisfied condition "Succeeded or Failed"
Mar 11 19:00:36.985: INFO: Trying to get logs from node node2 pod pod-3ded9a22-28df-4582-bbe2-e828738a5569 container test-container: 
STEP: delete the pod
Mar 11 19:00:36.999: INFO: Waiting for pod pod-3ded9a22-28df-4582-bbe2-e828738a5569 to disappear
Mar 11 19:00:37.001: INFO: Pod pod-3ded9a22-28df-4582-bbe2-e828738a5569 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:37.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8031" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1553,"failed":0}
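Editor's note: the "(non-root,0644,tmpfs)" variant differs from the default-medium test above in two ways: the volume is memory-backed (`medium: Memory`, i.e. tmpfs) and the container runs as a non-root user writing a 0644 file. A sketch (UID, image, and paths are illustrative):

```yaml
# Illustrative sketch: tmpfs-backed emptyDir written by a non-root container.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.32                # placeholder image
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f"]
    securityContext:
      runAsUser: 1001                  # illustrative non-root UID
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed
```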
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:37.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-8308
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar 11 19:00:37.133: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Mar 11 19:00:37.665: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 11 19:00:39.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:00:41.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:00:43.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:00:45.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:00:47.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086037, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:00:51.911: INFO: Waited 2.208034781s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:52.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8308" for this suite.

• [SLOW TEST:15.697 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":100,"skipped":1588,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:52.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6343
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-c55b9253-87fe-4572-8fdb-7d548513388f
STEP: Creating a pod to test consume secrets
Mar 11 19:00:52.846: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1" in namespace "projected-6343" to be "Succeeded or Failed"
Mar 11 19:00:52.850: INFO: Pod "pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.781449ms
Mar 11 19:00:54.853: INFO: Pod "pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006613794s
Mar 11 19:00:56.856: INFO: Pod "pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009617007s
STEP: Saw pod success
Mar 11 19:00:56.856: INFO: Pod "pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1" satisfied condition "Succeeded or Failed"
Mar 11 19:00:56.858: INFO: Trying to get logs from node node2 pod pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1 container projected-secret-volume-test: 
STEP: delete the pod
Mar 11 19:00:56.873: INFO: Waiting for pod pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1 to disappear
Mar 11 19:00:56.875: INFO: Pod pod-projected-secrets-293b412f-5767-4075-97f2-6e79f2e27de1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:00:56.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6343" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1600,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:00:56.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8831
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 11 19:01:01.547: INFO: Successfully updated pod "annotationupdate48f2a05f-dabe-4fb7-88b5-2d80e174ac36"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:01:03.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8831" for this suite.

• [SLOW TEST:6.689 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1603,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:01:03.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-8727
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8727
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 11 19:01:03.700: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 11 19:01:03.731: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 19:01:05.734: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 19:01:07.736: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:09.734: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:11.736: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:13.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:15.736: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:17.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:01:19.737: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 11 19:01:19.743: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 11 19:01:21.748: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 11 19:01:23.749: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 11 19:01:27.777: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.97:8080/dial?request=hostname&protocol=udp&host=10.244.3.95&port=8081&tries=1'] Namespace:pod-network-test-8727 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 19:01:27.777: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:01:27.888: INFO: Waiting for responses: map[]
Mar 11 19:01:27.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.97:8080/dial?request=hostname&protocol=udp&host=10.244.4.96&port=8081&tries=1'] Namespace:pod-network-test-8727 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 19:01:27.891: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:01:27.989: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:01:27.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8727" for this suite.

• [SLOW TEST:24.424 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:01:27.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-715
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-3e030789-559e-4163-821d-5d74eb050c58 in namespace container-probe-715
Mar 11 19:01:32.144: INFO: Started pod test-webserver-3e030789-559e-4163-821d-5d74eb050c58 in namespace container-probe-715
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 19:01:32.147: INFO: Initial restart count of pod test-webserver-3e030789-559e-4163-821d-5d74eb050c58 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:05:32.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-715" for this suite.

• [SLOW TEST:244.678 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1716,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:05:32.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9811
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9811
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-9811
Mar 11 19:05:32.810: INFO: Found 0 stateful pods, waiting for 1
Mar 11 19:05:42.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 11 19:05:42.829: INFO: Deleting all statefulset in ns statefulset-9811
Mar 11 19:05:42.832: INFO: Scaling statefulset ss to 0
Mar 11 19:06:02.846: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:06:02.849: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:06:02.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9811" for this suite.

• [SLOW TEST:30.188 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":105,"skipped":1733,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:06:02.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4914
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:06:03.007: INFO: Create a RollingUpdate DaemonSet
Mar 11 19:06:03.010: INFO: Check that daemon pods launch on every node of the cluster
Mar 11 19:06:03.014: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:03.014: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:03.014: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:03.016: INFO: Number of nodes with available pods: 0
Mar 11 19:06:03.016: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:04.021: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:04.021: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:04.021: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:04.024: INFO: Number of nodes with available pods: 0
Mar 11 19:06:04.024: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:05.024: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:05.024: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:05.024: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:05.027: INFO: Number of nodes with available pods: 0
Mar 11 19:06:05.027: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:06.022: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:06.022: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:06.023: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:06.026: INFO: Number of nodes with available pods: 0
Mar 11 19:06:06.026: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:07.023: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:07.023: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:07.023: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:07.025: INFO: Number of nodes with available pods: 1
Mar 11 19:06:07.025: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:08.022: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:08.022: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:08.023: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:08.027: INFO: Number of nodes with available pods: 1
Mar 11 19:06:08.027: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:09.023: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:09.023: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:09.023: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:09.028: INFO: Number of nodes with available pods: 2
Mar 11 19:06:09.028: INFO: Number of running nodes: 2, number of available pods: 2
Mar 11 19:06:09.028: INFO: Update the DaemonSet to trigger a rollout
Mar 11 19:06:09.034: INFO: Updating DaemonSet daemon-set
Mar 11 19:06:12.046: INFO: Roll back the DaemonSet before rollout is complete
Mar 11 19:06:12.052: INFO: Updating DaemonSet daemon-set
Mar 11 19:06:12.052: INFO: Make sure DaemonSet rollback is complete
Mar 11 19:06:12.054: INFO: Wrong image for pod: daemon-set-phpj7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 11 19:06:12.054: INFO: Pod daemon-set-phpj7 is not available
Mar 11 19:06:12.058: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:12.058: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:12.058: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:13.064: INFO: Wrong image for pod: daemon-set-phpj7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 11 19:06:13.064: INFO: Pod daemon-set-phpj7 is not available
Mar 11 19:06:13.068: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:13.068: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:13.068: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:14.062: INFO: Pod daemon-set-5jjsz is not available
Mar 11 19:06:14.066: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:14.066: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:14.067: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4914, will wait for the garbage collector to delete the pods
Mar 11 19:06:14.129: INFO: Deleting DaemonSet.extensions daemon-set took: 5.038247ms
Mar 11 19:06:14.229: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.365522ms
Mar 11 19:06:26.432: INFO: Number of nodes with available pods: 0
Mar 11 19:06:26.432: INFO: Number of running nodes: 0, number of available pods: 0
Mar 11 19:06:26.435: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4914/daemonsets","resourceVersion":"27181"},"items":null}

Mar 11 19:06:26.438: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4914/pods","resourceVersion":"27181"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:06:26.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4914" for this suite.

• [SLOW TEST:23.590 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":106,"skipped":1738,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:06:26.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5391
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 11 19:06:26.606: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:26.606: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:26.606: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:26.608: INFO: Number of nodes with available pods: 0
Mar 11 19:06:26.608: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:27.614: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:27.614: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:27.614: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:27.617: INFO: Number of nodes with available pods: 0
Mar 11 19:06:27.617: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:28.616: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:28.616: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:28.616: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:28.618: INFO: Number of nodes with available pods: 0
Mar 11 19:06:28.618: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:29.613: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:29.613: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:29.613: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:29.617: INFO: Number of nodes with available pods: 0
Mar 11 19:06:29.617: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:30.615: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:30.615: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:30.615: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:30.617: INFO: Number of nodes with available pods: 1
Mar 11 19:06:30.617: INFO: Node node1 is running more than one daemon pod
Mar 11 19:06:31.613: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.613: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.613: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.615: INFO: Number of nodes with available pods: 2
Mar 11 19:06:31.615: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 11 19:06:31.629: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.629: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.629: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:31.631: INFO: Number of nodes with available pods: 1
Mar 11 19:06:31.631: INFO: Node node2 is running more than one daemon pod
Mar 11 19:06:32.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:32.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:32.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:32.642: INFO: Number of nodes with available pods: 1
Mar 11 19:06:32.642: INFO: Node node2 is running more than one daemon pod
Mar 11 19:06:33.635: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:33.635: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:33.635: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:33.638: INFO: Number of nodes with available pods: 1
Mar 11 19:06:33.638: INFO: Node node2 is running more than one daemon pod
Mar 11 19:06:34.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:34.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:34.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:34.641: INFO: Number of nodes with available pods: 1
Mar 11 19:06:34.641: INFO: Node node2 is running more than one daemon pod
Mar 11 19:06:35.637: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:35.637: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:35.637: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:06:35.640: INFO: Number of nodes with available pods: 2
Mar 11 19:06:35.640: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5391, will wait for the garbage collector to delete the pods
Mar 11 19:06:35.704: INFO: Deleting DaemonSet.extensions daemon-set took: 6.271707ms
Mar 11 19:06:36.305: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.492478ms
Mar 11 19:06:46.508: INFO: Number of nodes with available pods: 0
Mar 11 19:06:46.508: INFO: Number of running nodes: 0, number of available pods: 0
Mar 11 19:06:46.510: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5391/daemonsets","resourceVersion":"27351"},"items":null}

Mar 11 19:06:46.512: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5391/pods","resourceVersion":"27351"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:06:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5391" for this suite.

• [SLOW TEST:20.072 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":107,"skipped":1749,"failed":0}
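The "can't tolerate node master1/2/3" lines above come from a taint/toleration check: DaemonSet pods without a matching toleration are skipped on NoSchedule-tainted masters, so only node1 and node2 count. A simplified sketch of that check (my own illustration, not the e2e framework's code):

```python
# Sketch of the taint-toleration filtering seen in the log: pods lacking a
# master toleration are only schedulable on the untainted worker nodes.
def tolerates(taint, tolerations):
    """True if any toleration matches the taint's key and effect."""
    for t in tolerations:
        key_ok = t.get("key") in (None, taint["key"])      # empty key matches all
        effect_ok = t.get("effect") in (None, "", taint["effect"])
        if key_ok and effect_ok:
            return True
    return False

def schedulable_nodes(nodes, tolerations):
    """Names of nodes whose NoSchedule taints are all tolerated."""
    result = []
    for node in nodes:
        taints = [t for t in node.get("taints", []) if t["effect"] == "NoSchedule"]
        if all(tolerates(t, tolerations) for t in taints):
            result.append(node["name"])
    return result

masters = [{"name": f"master{i}",
            "taints": [{"key": "node-role.kubernetes.io/master",
                        "effect": "NoSchedule"}]} for i in (1, 2, 3)]
workers = [{"name": "node1"}, {"name": "node2"}]

# A DaemonSet with no tolerations lands only on the workers, as in the log.
print(schedulable_nodes(masters + workers, []))  # ['node1', 'node2']
```

This matches the log's "Number of running nodes: 2" on the two workers.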
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:06:46.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7005
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-f9339ff1-62df-49de-9e1a-1dc0e7f27e98
STEP: Creating secret with name secret-projected-all-test-volume-1bf118b6-2c8b-4466-829c-9ea32ea57bd0
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 11 19:06:46.674: INFO: Waiting up to 5m0s for pod "projected-volume-75fa73af-9768-4865-bae8-602687521622" in namespace "projected-7005" to be "Succeeded or Failed"
Mar 11 19:06:46.678: INFO: Pod "projected-volume-75fa73af-9768-4865-bae8-602687521622": Phase="Pending", Reason="", readiness=false. Elapsed: 3.124182ms
Mar 11 19:06:48.680: INFO: Pod "projected-volume-75fa73af-9768-4865-bae8-602687521622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005975605s
Mar 11 19:06:50.685: INFO: Pod "projected-volume-75fa73af-9768-4865-bae8-602687521622": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010090283s
Mar 11 19:06:52.690: INFO: Pod "projected-volume-75fa73af-9768-4865-bae8-602687521622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015218607s
STEP: Saw pod success
Mar 11 19:06:52.690: INFO: Pod "projected-volume-75fa73af-9768-4865-bae8-602687521622" satisfied condition "Succeeded or Failed"
Mar 11 19:06:52.692: INFO: Trying to get logs from node node2 pod projected-volume-75fa73af-9768-4865-bae8-602687521622 container projected-all-volume-test: 
STEP: delete the pod
Mar 11 19:06:52.714: INFO: Waiting for pod projected-volume-75fa73af-9768-4865-bae8-602687521622 to disappear
Mar 11 19:06:52.716: INFO: Pod projected-volume-75fa73af-9768-4865-bae8-602687521622 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:06:52.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7005" for this suite.

• [SLOW TEST:6.194 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1781,"failed":0}
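The test above mounts one `projected` volume that combines all three source kinds. A minimal illustrative spec (names like `my-config`/`my-secret` are placeholders, not the test's generated names):

```python
# Hypothetical pod volume showing what the "Projected combined" test exercises:
# configMap, secret, and downwardAPI sources merged into a single mount.
projected_volume = {
    "name": "all-in-one",
    "projected": {
        "sources": [
            {"configMap": {"name": "my-config"}},
            {"secret": {"name": "my-secret"}},
            {"downwardAPI": {"items": [
                {"path": "labels",
                 "fieldRef": {"fieldPath": "metadata.labels"}},
            ]}},
        ]
    },
}

# The test verifies a file from each source appears under the one mount point.
source_kinds = [next(iter(s)) for s in projected_volume["projected"]["sources"]]
print(source_kinds)  # ['configMap', 'secret', 'downwardAPI']
```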
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:06:52.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-7027
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 11 19:06:52.849: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:01.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7027" for this suite.

• [SLOW TEST:8.391 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":109,"skipped":1789,"failed":0}
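The init-container test above creates a pod shaped roughly like the sketch below: `restartPolicy: Never` plus `spec.initContainers`, which the kubelet runs to completion, in order, before the main containers start (container names and images here are illustrative):

```python
# Sketch of the pod shape for "should invoke init containers on a RestartNever
# pod": init containers run sequentially before 'containers'; with Never they
# are not retried on failure.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["true"]},
            {"name": "init2", "image": "busybox", "command": ["true"]},
        ],
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["true"]},
        ],
    },
}

print([c["name"] for c in pod["spec"]["initContainers"]])  # ['init1', 'init2']
```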
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:01.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6016
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 11 19:07:01.251: INFO: Waiting up to 5m0s for pod "downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70" in namespace "downward-api-6016" to be "Succeeded or Failed"
Mar 11 19:07:01.255: INFO: Pod "downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070267ms
Mar 11 19:07:03.261: INFO: Pod "downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009274038s
Mar 11 19:07:05.267: INFO: Pod "downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015238498s
STEP: Saw pod success
Mar 11 19:07:05.267: INFO: Pod "downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70" satisfied condition "Succeeded or Failed"
Mar 11 19:07:05.269: INFO: Trying to get logs from node node1 pod downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70 container dapi-container: 
STEP: delete the pod
Mar 11 19:07:05.289: INFO: Waiting for pod downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70 to disappear
Mar 11 19:07:05.292: INFO: Pod downward-api-1f29331f-cfe5-47c7-a212-cb1e7f903d70 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6016" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1789,"failed":0}
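This downward API test relies on the rule that when a container declares no resource limits, `resourceFieldRef` falls back to the node's allocatable capacity. Illustrative env entries (variable names are mine, not the test's):

```python
# Env vars via the downward API: with no container limits set, these resolve
# to the node's allocatable CPU/memory, which is what the test verifies.
env = [
    {"name": "CPU_LIMIT",
     "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
    {"name": "MEMORY_LIMIT",
     "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
]
print([e["valueFrom"]["resourceFieldRef"]["resource"] for e in env])
```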
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:05.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1879
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:07:05.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109" in namespace "projected-1879" to be "Succeeded or Failed"
Mar 11 19:07:05.437: INFO: Pod "downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752929ms
Mar 11 19:07:07.442: INFO: Pod "downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006999591s
Mar 11 19:07:09.445: INFO: Pod "downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01053816s
STEP: Saw pod success
Mar 11 19:07:09.445: INFO: Pod "downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109" satisfied condition "Succeeded or Failed"
Mar 11 19:07:09.448: INFO: Trying to get logs from node node1 pod downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109 container client-container: 
STEP: delete the pod
Mar 11 19:07:09.467: INFO: Waiting for pod downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109 to disappear
Mar 11 19:07:09.469: INFO: Pod downwardapi-volume-2272a67c-ce92-4f2f-82f0-67c48e2f2109 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:09.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1879" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1801,"failed":0}
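The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed:` lines throughout this log follow a simple poll loop. A simplified sketch (not the framework's actual implementation):

```python
import time

# Simplified poll loop behind the "Waiting up to 5m0s ... Succeeded or Failed"
# log lines: check the phase every `interval` seconds until a terminal phase
# or the timeout.
def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0, sleep=time.sleep):
    """Return the first phase in `want`, or raise TimeoutError."""
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(interval)
        waited += interval
    raise TimeoutError("pod never reached %s" % (want,))

# Fake phase sequence mirroring the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```

The `sleep` parameter is injected so the sketch runs instantly in tests.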
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:09.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5328
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:07:10.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:07:12.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086430, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086430, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086430, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086430, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:07:15.579: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:15.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5328" for this suite.
STEP: Destroying namespace "webhook-5328-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.178 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":112,"skipped":1825,"failed":0}
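The webhook test's update/patch steps edit the configuration's `rules[].operations`: drop CREATE (new configMaps are then not mutated), then patch CREATE back in (mutation resumes). A sketch with an illustrative configuration, not the test's own:

```python
# Sketch of the rule edit performed on a MutatingWebhookConfiguration in this
# test; the webhook name and rule values are placeholders.
webhook = {
    "name": "example.mutating.test",
    "rules": [{
        "apiGroups": [""],
        "apiVersions": ["v1"],
        "resources": ["configmaps"],
        "operations": ["CREATE"],
    }],
}

def set_operations(cfg, ops):
    """Return a copy of cfg whose first rule matches exactly `ops`."""
    return {**cfg, "rules": [{**cfg["rules"][0], "operations": list(ops)}]}

updated = set_operations(webhook, ["UPDATE"])            # CREATE removed
patched = set_operations(updated, ["CREATE", "UPDATE"])  # CREATE restored
print(updated["rules"][0]["operations"], patched["rules"][0]["operations"])
```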
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:15.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4582
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-bacf2f92-dc83-4b4c-9914-f571f994a85e
STEP: Creating a pod to test consume secrets
Mar 11 19:07:15.802: INFO: Waiting up to 5m0s for pod "pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03" in namespace "secrets-4582" to be "Succeeded or Failed"
Mar 11 19:07:15.804: INFO: Pod "pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564518ms
Mar 11 19:07:17.807: INFO: Pod "pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005065699s
Mar 11 19:07:19.813: INFO: Pod "pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010809785s
STEP: Saw pod success
Mar 11 19:07:19.813: INFO: Pod "pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03" satisfied condition "Succeeded or Failed"
Mar 11 19:07:19.815: INFO: Trying to get logs from node node2 pod pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03 container secret-volume-test: 
STEP: delete the pod
Mar 11 19:07:19.829: INFO: Waiting for pod pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03 to disappear
Mar 11 19:07:19.831: INFO: Pod pod-secrets-5189ccdb-30b2-4a25-ab4e-6a2ea54dfd03 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:19.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4582" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1864,"failed":0}
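"Mappings and Item Mode set" refers to a secret volume whose `items` remap a key to a chosen path with a per-item file mode. An illustrative volume (key, path, and mode values are mine, not taken from the test):

```python
# Secret volume with a key-to-path mapping and an explicit per-item mode,
# which overrides the volume's defaultMode for that one file.
secret_volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-map",
        "items": [{
            "key": "data-1",
            "path": "new-path-data-1",
            "mode": 0o400,  # file appears read-only to the owner
        }],
    },
}
item = secret_volume["secret"]["items"][0]
print(oct(item["mode"]), item["path"])
```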
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:19.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5780
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 11 19:07:19.977: INFO: Waiting up to 5m0s for pod "pod-1670d589-28d0-4f01-854f-598dd7d125df" in namespace "emptydir-5780" to be "Succeeded or Failed"
Mar 11 19:07:19.980: INFO: Pod "pod-1670d589-28d0-4f01-854f-598dd7d125df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2072ms
Mar 11 19:07:21.985: INFO: Pod "pod-1670d589-28d0-4f01-854f-598dd7d125df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007743715s
Mar 11 19:07:23.989: INFO: Pod "pod-1670d589-28d0-4f01-854f-598dd7d125df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012116677s
STEP: Saw pod success
Mar 11 19:07:23.989: INFO: Pod "pod-1670d589-28d0-4f01-854f-598dd7d125df" satisfied condition "Succeeded or Failed"
Mar 11 19:07:23.992: INFO: Trying to get logs from node node2 pod pod-1670d589-28d0-4f01-854f-598dd7d125df container test-container: 
STEP: delete the pod
Mar 11 19:07:24.005: INFO: Waiting for pod pod-1670d589-28d0-4f01-854f-598dd7d125df to disappear
Mar 11 19:07:24.007: INFO: Pod pod-1670d589-28d0-4f01-854f-598dd7d125df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:24.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5780" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1883,"failed":0}
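The "(non-root,0644,default)" case names the pod's shape: run as a non-root UID, write a file with mode 0644, into an emptyDir on the default medium. A sketch with illustrative UID and mount path:

```python
# Sketch of the emptyDir test pod: {} selects the node's default storage
# medium, and runAsUser makes the container non-root. Values are illustrative.
pod_spec = {
    "securityContext": {"runAsUser": 1001},                # non-root
    "volumes": [{"name": "test-volume", "emptyDir": {}}],  # default medium
    "containers": [{
        "name": "test-container",
        "image": "busybox",
        "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
    }],
}
print(pod_spec["volumes"][0]["emptyDir"] == {})
```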
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:24.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8680
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:07:24.139: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:29.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8680" for this suite.

• [SLOW TEST:5.661 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":115,"skipped":1921,"failed":0}
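Getting/updating/patching `/status` requires the CRD to enable the status subresource. An illustrative CRD (group and kind names are placeholders, not the test's generated ones):

```python
# CustomResourceDefinition enabling the /status subresource: writes to /status
# touch only .status, and writes to the main resource ignore .status.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "noxus.mygroup.example.com"},
    "spec": {
        "group": "mygroup.example.com",
        "scope": "Cluster",
        "names": {"plural": "noxus", "singular": "noxu", "kind": "Noxu"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "subresources": {"status": {}},  # turns on GET/PUT/PATCH of /status
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "x-kubernetes-preserve-unknown-fields": True,
            }},
        }],
    },
}
print("status" in crd["spec"]["versions"][0]["subresources"])  # True
```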
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:29.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-3814
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0311 19:07:39.861041      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:07:39.861: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:07:39.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3814" for this suite.

• [SLOW TEST:10.193 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":116,"skipped":1925,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
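The garbage-collector test above gives half the pods of `simpletest-rc-to-be-deleted` a second owner, `simpletest-rc-to-stay`, then deletes the first RC and verifies the doubly-owned pods survive. The core rule being checked is that a dependent is collected only once *all* of its owners are gone. The following is a simplified stdlib-only model of that rule, not the actual kube-controller-manager code (which additionally tracks foreground-deletion finalizers and `blockOwnerDeletion`):

```go
package main

import "fmt"

// ownerRef models the essentials of metav1.OwnerReference for this sketch.
type ownerRef struct {
	name               string
	blockOwnerDeletion bool
}

// survivesGC reports whether a dependent survives garbage collection:
// it is kept as long as at least one of its owners still exists.
func survivesGC(owners []ownerRef, alive map[string]bool) bool {
	for _, o := range owners {
		if alive[o.name] {
			return true
		}
	}
	return false
}

func main() {
	// Mirrors the test: rc-to-be-deleted has been deleted, rc-to-stay has not.
	alive := map[string]bool{"simpletest-rc-to-stay": true}

	twoOwners := []ownerRef{
		{"simpletest-rc-to-be-deleted", true},
		{"simpletest-rc-to-stay", true},
	}
	oneOwner := []ownerRef{{"simpletest-rc-to-be-deleted", true}}

	fmt.Println(survivesGC(twoOwners, alive)) // pod with a surviving owner: true
	fmt.Println(survivesGC(oneOwner, alive))  // pod whose only owner is gone: false
}
```

This is why the test name reads "should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted": the live owner vetoes collection.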
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:07:39.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4207
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:07:40.001: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 11 19:07:45.005: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 11 19:07:47.010: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 11 19:07:49.014: INFO: Creating deployment "test-rollover-deployment"
Mar 11 19:07:49.020: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar 11 19:07:51.026: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar 11 19:07:51.032: INFO: Ensure that both replica sets have 1 created replica
Mar 11 19:07:51.038: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar 11 19:07:51.044: INFO: Updating deployment test-rollover-deployment
Mar 11 19:07:51.044: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar 11 19:07:53.051: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar 11 19:07:53.057: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar 11 19:07:53.063: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:07:53.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086471, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:07:55.070: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:07:55.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086474, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:07:57.069: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:07:57.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086474, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:07:59.069: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:07:59.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086474, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:08:01.069: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:08:01.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086474, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:08:03.071: INFO: all replica sets need to contain the pod-template-hash label
Mar 11 19:08:03.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086474, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086469, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:08:05.073: INFO: 
Mar 11 19:08:05.073: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 11 19:08:05.081: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4207 /apis/apps/v1/namespaces/deployment-4207/deployments/test-rollover-deployment 59c04515-eef3-4c11-aa87-10de513d62c4 28267 2 2021-03-11 19:07:49 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2021-03-11 19:07:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:08:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a98d88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-11 19:07:49 +0000 UTC,LastTransitionTime:2021-03-11 19:07:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2021-03-11 19:08:04 +0000 UTC,LastTransitionTime:2021-03-11 19:07:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar 11 19:08:05.084: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-4207 /apis/apps/v1/namespaces/deployment-4207/replicasets/test-rollover-deployment-84f7f6f64b f436b660-57c1-421b-bb46-1d6f0b13e64e 28256 2 2021-03-11 19:07:51 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 59c04515-eef3-4c11-aa87-10de513d62c4 0xc000a998f7 0xc000a998f8}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:08:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 99 48 52 53 49 53 45 101 101 102 51 45 52 99 49 49 45 97 97 56 55 45 49 48 100 101 53 49 51 100 54 50 99 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 
110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a99f68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:08:05.084: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar 11 19:08:05.084: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4207 /apis/apps/v1/namespaces/deployment-4207/replicasets/test-rollover-controller 442fcefc-41b1-41a1-8097-bb03c38bf7e5 28265 2 2021-03-11 19:07:39 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 59c04515-eef3-4c11-aa87-10de513d62c4 0xc000a99407 0xc000a99408}] []  [{e2e.test Update apps/v1 2021-03-11 19:07:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:08:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 99 48 52 53 49 53 45 101 101 102 51 45 52 99 49 49 45 97 97 56 55 45 49 48 100 101 53 49 51 100 54 50 99 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000a99598  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:08:05.085: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-4207 /apis/apps/v1/namespaces/deployment-4207/replicasets/test-rollover-deployment-5686c4cfd5 134b415b-696d-4b1c-95cb-83b82381e081 28189 2 2021-03-11 19:07:49 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 59c04515-eef3-4c11-aa87-10de513d62c4 0xc000a996d7 0xc000a996d8}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:07:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 57 99 48 52 53 49 53 45 101 101 102 51 45 52 99 49 49 45 97 97 56 55 45 49 48 100 101 53 49 51 100 54 50 99 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 
58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a99808  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
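The `FieldsV1{Raw:*[123 34 ...]}` blocks above are managed-fields JSON that Go's struct printer has rendered as decimal byte values. A minimal sketch for turning such a dump back into readable JSON (assuming you paste the space-separated numbers straight from the log):

```python
def decode_fieldsv1(raw: str) -> str:
    """Decode a FieldsV1 Raw dump (space-separated decimal byte values,
    as printed by Go's %v formatter) back into its JSON text."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# First few bytes of the ReplicaSet dump above:
prefix = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
print(decode_fieldsv1(prefix))  # {"f:metadata":{
```

Decoding the full arrays yields the per-manager field ownership maps (`f:metadata`, `f:spec`, `f:status`, ...) recorded by server-side apply.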
Mar 11 19:08:05.088: INFO: Pod "test-rollover-deployment-84f7f6f64b-prlpf" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-prlpf test-rollover-deployment-84f7f6f64b- deployment-4207 /api/v1/namespaces/deployment-4207/pods/test-rollover-deployment-84f7f6f64b-prlpf b67eee6d-512d-4c7e-ae84-870d01388bea 28219 0 2021-03-11 19:07:51 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.115"
    ],
    "mac": "fa:fc:1d:94:38:02",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.115"
    ],
    "mac": "fa:fc:1d:94:38:02",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b f436b660-57c1-421b-bb46-1d6f0b13e64e 0xc00136c75f 0xc00136c770}] []  [{kube-controller-manager Update v1 2021-03-11 19:07:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 52 51 54 98 54 54 48 45 53 55 99 49 45 52 50 49 98 45 98 98 52 54 45 49 100 54 102 48 98 49 51 101 54 52 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 
125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:07:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:07:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 
102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 52 46 49 49 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tlb94,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tlb94,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tlb94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessP
robe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 
19:07:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:07:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:07:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.115,StartTime:2021-03-11 19:07:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:07:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://106c645ede47ff6b4c94abfc334a79128fbf14a9dfe8f59201f090dadf09bdef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:05.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4207" for this suite.

• [SLOW TEST:25.225 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":117,"skipped":1961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:05.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-8685
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Mar 11 19:08:05.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:27.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8685" for this suite.

• [SLOW TEST:22.033 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":118,"skipped":1990,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:27.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-836
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 11 19:08:27.266: INFO: Waiting up to 5m0s for pod "pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd" in namespace "emptydir-836" to be "Succeeded or Failed"
Mar 11 19:08:27.269: INFO: Pod "pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.768084ms
Mar 11 19:08:29.272: INFO: Pod "pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006408087s
Mar 11 19:08:31.278: INFO: Pod "pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012575716s
STEP: Saw pod success
Mar 11 19:08:31.278: INFO: Pod "pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd" satisfied condition "Succeeded or Failed"
Mar 11 19:08:31.281: INFO: Trying to get logs from node node1 pod pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd container test-container: 
STEP: delete the pod
Mar 11 19:08:31.296: INFO: Waiting for pod pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd to disappear
Mar 11 19:08:31.298: INFO: Pod pod-6355dd90-d13a-4bcc-af82-637f5d9a4efd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:31.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-836" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:31.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2459
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-366ebfe3-a826-41a5-b81e-067e0ddbd93c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:37.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2459" for this suite.

• [SLOW TEST:6.174 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2030,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:37.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8561
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Mar 11 19:08:37.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8561'
Mar 11 19:08:37.928: INFO: stderr: ""
Mar 11 19:08:37.928: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 19:08:37.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8561'
Mar 11 19:08:38.091: INFO: stderr: ""
Mar 11 19:08:38.091: INFO: stdout: "update-demo-nautilus-4555k update-demo-nautilus-hkhnh "
Mar 11 19:08:38.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4555k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8561'
Mar 11 19:08:38.229: INFO: stderr: ""
Mar 11 19:08:38.229: INFO: stdout: ""
Mar 11 19:08:38.229: INFO: update-demo-nautilus-4555k is created but not running
Mar 11 19:08:43.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8561'
Mar 11 19:08:43.483: INFO: stderr: ""
Mar 11 19:08:43.483: INFO: stdout: "update-demo-nautilus-4555k update-demo-nautilus-hkhnh "
Mar 11 19:08:43.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4555k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8561'
Mar 11 19:08:43.646: INFO: stderr: ""
Mar 11 19:08:43.646: INFO: stdout: "true"
Mar 11 19:08:43.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4555k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8561'
Mar 11 19:08:43.805: INFO: stderr: ""
Mar 11 19:08:43.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:08:43.805: INFO: validating pod update-demo-nautilus-4555k
Mar 11 19:08:43.809: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:08:43.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:08:43.809: INFO: update-demo-nautilus-4555k is verified up and running
Mar 11 19:08:43.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkhnh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8561'
Mar 11 19:08:43.971: INFO: stderr: ""
Mar 11 19:08:43.971: INFO: stdout: "true"
Mar 11 19:08:43.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkhnh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8561'
Mar 11 19:08:44.130: INFO: stderr: ""
Mar 11 19:08:44.130: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:08:44.130: INFO: validating pod update-demo-nautilus-hkhnh
Mar 11 19:08:44.134: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:08:44.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:08:44.134: INFO: update-demo-nautilus-hkhnh is verified up and running
STEP: using delete to clean up resources
Mar 11 19:08:44.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8561'
Mar 11 19:08:44.259: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:08:44.259: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 11 19:08:44.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8561'
Mar 11 19:08:44.446: INFO: stderr: "No resources found in kubectl-8561 namespace.\n"
Mar 11 19:08:44.446: INFO: stdout: ""
Mar 11 19:08:44.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 11 19:08:44.599: INFO: stderr: ""
Mar 11 19:08:44.599: INFO: stdout: "update-demo-nautilus-4555k\nupdate-demo-nautilus-hkhnh\n"
Mar 11 19:08:45.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8561'
Mar 11 19:08:45.281: INFO: stderr: "No resources found in kubectl-8561 namespace.\n"
Mar 11 19:08:45.281: INFO: stdout: ""
Mar 11 19:08:45.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8561 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 11 19:08:45.441: INFO: stderr: ""
Mar 11 19:08:45.441: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:45.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8561" for this suite.

• [SLOW TEST:7.969 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":121,"skipped":2048,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:45.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8926
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:08:46.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:08:48.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086526, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086526, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086526, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086526, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:08:51.091: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Mar 11 19:08:55.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8926 to-be-attached-pod -i -c=container1'
Mar 11 19:08:55.290: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:08:55.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8926" for this suite.
STEP: Destroying namespace "webhook-8926-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.891 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":122,"skipped":2050,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:08:55.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4568
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:08:55.661: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:08:57.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086535, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086535, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086535, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751086535, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:09:00.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:10.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4568" for this suite.
STEP: Destroying namespace "webhook-4568-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.463 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":123,"skipped":2052,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:10.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-7785
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7785
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 11 19:09:10.928: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 11 19:09:10.960: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 19:09:12.963: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 19:09:14.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:16.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:18.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:20.965: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:22.965: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:24.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 19:09:26.964: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 11 19:09:26.971: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 11 19:09:28.975: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 11 19:09:30.977: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 11 19:09:35.017: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.110 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7785 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 19:09:35.017: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:09:36.134: INFO: Found all expected endpoints: [netserver-0]
Mar 11 19:09:36.136: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.120 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7785 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 19:09:36.137: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:09:37.252: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:37.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7785" for this suite.

• [SLOW TEST:26.456 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2059,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:37.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6301
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:09:37.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666" in namespace "downward-api-6301" to be "Succeeded or Failed"
Mar 11 19:09:37.399: INFO: Pod "downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350812ms
Mar 11 19:09:39.404: INFO: Pod "downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007659719s
Mar 11 19:09:41.408: INFO: Pod "downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010958291s
STEP: Saw pod success
Mar 11 19:09:41.408: INFO: Pod "downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666" satisfied condition "Succeeded or Failed"
Mar 11 19:09:41.411: INFO: Trying to get logs from node node2 pod downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666 container client-container: 
STEP: delete the pod
Mar 11 19:09:41.432: INFO: Waiting for pod downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666 to disappear
Mar 11 19:09:41.434: INFO: Pod downwardapi-volume-8c0879bf-c6ca-4b48-9007-b87625890666 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6301" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2066,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:41.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2219
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:09:41.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107" in namespace "projected-2219" to be "Succeeded or Failed"
Mar 11 19:09:41.581: INFO: Pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429058ms
Mar 11 19:09:43.587: INFO: Pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009295443s
Mar 11 19:09:45.591: INFO: Pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01327051s
Mar 11 19:09:47.594: INFO: Pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016324557s
STEP: Saw pod success
Mar 11 19:09:47.594: INFO: Pod "downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107" satisfied condition "Succeeded or Failed"
Mar 11 19:09:47.597: INFO: Trying to get logs from node node2 pod downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107 container client-container: 
STEP: delete the pod
Mar 11 19:09:47.612: INFO: Waiting for pod downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107 to disappear
Mar 11 19:09:47.614: INFO: Pod downwardapi-volume-b561119d-7b06-48e0-bb6e-94e1b7365107 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2219" for this suite.

• [SLOW TEST:6.180 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2069,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:47.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4418
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-4c98cae3-1628-4cb8-b021-3a9ae2f45da4
STEP: Creating a pod to test consume configMaps
Mar 11 19:09:47.763: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45" in namespace "projected-4418" to be "Succeeded or Failed"
Mar 11 19:09:47.765: INFO: Pod "pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461481ms
Mar 11 19:09:49.769: INFO: Pod "pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006529668s
Mar 11 19:09:51.772: INFO: Pod "pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009519295s
STEP: Saw pod success
Mar 11 19:09:51.772: INFO: Pod "pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45" satisfied condition "Succeeded or Failed"
Mar 11 19:09:51.775: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:09:51.789: INFO: Waiting for pod pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45 to disappear
Mar 11 19:09:51.791: INFO: Pod pod-projected-configmaps-09b0c32b-f52c-4080-a259-342f93a20f45 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:51.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4418" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2073,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:51.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-932
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:160
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:09:51.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-932" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":128,"skipped":2091,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:09:51.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-3298
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Mar 11 19:14:52.627: FAIL: Unexpected error:
    <*errors.errorString | 0xc000181fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func8.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:228 +0x789
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00328ac00, 0x4afad60)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "svcaccounts-3298".
STEP: Found 8 events.
Mar 11 19:14:52.632: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {default-scheduler } Scheduled: Successfully assigned svcaccounts-3298/pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632 to node1
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:55 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {multus } AddedInterface: Add eth0 [10.244.3.113/24]
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:55 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:56 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:56 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} Failed: Error: ErrImagePull
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:56 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:09:56 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} Failed: Error: ImagePullBackOff
Mar 11 19:14:52.632: INFO: At 2021-03-11 19:10:34 +0000 UTC - event for pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:14:52.634: INFO: POD                                                       NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:14:52.634: INFO: pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632  node1  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:09:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:09:52 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:09:52 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:09:52 +0000 UTC  }]
Mar 11 19:14:52.634: INFO: 
Mar 11 19:14:52.638: INFO: 
Logging node info for node master1
Mar 11 19:14:52.641: INFO: Node Info: &Node{ObjectMeta:{master1   /api/v1/nodes/master1 bc51b401-422a-4e82-b449-caa7cdc72ceb 30325 0 2021-03-11 17:50:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0e:0e:ac:80:fe:e5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:14:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 
116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0cb21bb9b8b64bf38523b2f5a8bdad14,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:4a77cc46-4c80-409c-8c40-c24648f76e32,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5617799,},ContainerImage{Names:[alpine@sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be alpine:latest],SizeBytes:5613158,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
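(Editor's note on reading the dump above: the long runs of decimal numbers inside each `FieldsV1{Raw:*[...]}` entry are Go's default rendering of a `[]byte` slice, i.e. the ASCII byte values of the managed-fields JSON for that field manager. A minimal sketch of a decoder, assuming the byte values have been copied out of the log into a Python list:

```python
import json

def decode_fields_v1(raw_bytes):
    """Convert a FieldsV1 Raw byte-value list (as printed in the node
    dump) back into the managed-fields JSON object it encodes."""
    text = bytes(raw_bytes).decode("utf-8")
    return json.loads(text)

# Opening bytes of a kubeadm managed-fields entry: 123='{', 34='"',
# 102='f', 58=':', then "metadata", closing quote, ':', '{', '}', '}'.
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97,
          34, 58, 123, 125, 125]
print(decode_fields_v1(sample))  # → {'f:metadata': {}}
```

Decoding a full entry this way recovers the `f:`/`k:` field-ownership tree that server-side apply tracks per manager, which is easier to inspect than the raw byte dump.)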
Mar 11 19:14:52.642: INFO: 
Logging kubelet events for node master1
Mar 11 19:14:52.644: INFO: 
Logging pods the kubelet thinks are on node master1
Mar 11 19:14:52.659: INFO: kube-proxy-bwz9p started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:14:52.659: INFO: kube-flannel-pzw7v started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Init container install-cni ready: true, restart count 2
Mar 11 19:14:52.659: INFO: 	Container kube-flannel ready: true, restart count 1
Mar 11 19:14:52.659: INFO: coredns-59dcc4799b-cp4vq started at 2021-03-11 17:53:08 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container coredns ready: true, restart count 1
Mar 11 19:14:52.659: INFO: kube-controller-manager-master1 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar 11 19:14:52.659: INFO: kube-apiserver-master1 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:14:52.659: INFO: kube-multus-ds-amd64-2jdtx started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:14:52.659: INFO: docker-registry-docker-registry-6d4484d8f9-pkjwp started at 2021-03-11 17:55:49 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container docker-registry ready: true, restart count 0
Mar 11 19:14:52.659: INFO: 	Container nginx ready: true, restart count 0
Mar 11 19:14:52.659: INFO: node-exporter-b54mc started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.659: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:14:52.659: INFO: kube-scheduler-master1 started at 2021-03-11 18:07:23 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.659: INFO: 	Container kube-scheduler ready: true, restart count 1
W0311 19:14:52.663823      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:14:52.692: INFO: 
Latency metrics for node master1
Mar 11 19:14:52.692: INFO: 
Logging node info for node master2
Mar 11 19:14:52.695: INFO: Node Info: &Node{ObjectMeta:{master2   /api/v1/nodes/master2 81d12a4f-6154-421a-896a-6071517cc7cf 30324 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8a:67:dc:b1:33:9d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:14:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 
116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234771968 0} {} 196518332Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361914368 0} {} 195665932Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:47 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:47 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:47 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:14:47 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b3061860c4ba472e9c76577f315c0ddb,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bc6d20a6-057d-4d5d-af80-cb65b29e2a9f,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 19:14:52.695: INFO: 
Logging kubelet events for node master2
Mar 11 19:14:52.698: INFO: 
Logging pods the kubelet thinks are on node master2
Mar 11 19:14:52.712: INFO: kube-apiserver-master2 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.712: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:14:52.712: INFO: kube-controller-manager-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.712: INFO: 	Container kube-controller-manager ready: true, restart count 2
Mar 11 19:14:52.712: INFO: kube-scheduler-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.713: INFO: 	Container kube-scheduler ready: true, restart count 2
Mar 11 19:14:52.713: INFO: kube-proxy-qg4j5 started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.713: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:14:52.713: INFO: kube-flannel-kfjhn started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:14:52.713: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:14:52.713: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:14:52.713: INFO: kube-multus-ds-amd64-xx6h7 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.713: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:14:52.713: INFO: node-exporter-j8bwb started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.713: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.713: INFO: 	Container node-exporter ready: true, restart count 0
W0311 19:14:52.717502      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:14:52.741: INFO: 
Latency metrics for node master2
Mar 11 19:14:52.741: INFO: 
Logging node info for node master3
Mar 11 19:14:52.744: INFO: Node Info: &Node{ObjectMeta:{master3   /api/v1/nodes/master3 2ec4f135-9e61-46a6-a537-0ad6199eddb1 30326 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4e:4a:32:07:d3:68"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 
17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:14:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 
97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 
44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 
58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:14:48 +0000 UTC,LastTransitionTime:2021-03-11 17:54:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4167bf4cb2634ca88fc2626bbda0ce42,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:52af946c-b482-4940-ad01-ee4a9a06c438,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 
quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
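The managedFields payloads in the node dump above appear as long runs of decimal byte values — Go's `%v` rendering of the raw `FieldsV1` JSON. A minimal Python sketch (not part of the e2e framework; the function name is my own) for decoding such a run back into readable JSON text:

```python
def decode_fieldsv1(raw: str) -> str:
    """Turn a space-separated run of decimal byte values into its UTF-8 text."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# A short run in the style of the dump above (hypothetical excerpt):
sample = "123 34 102 58 99 112 117 34 58 123 125 125"
print(decode_fieldsv1(sample))  # -> {"f:cpu":{}}
```

Piping a full `Raw:*[...]` run through this yields the server-side-apply field ownership map (e.g. `{"f:status":{"f:conditions":...}}`) that each manager recorded on the Node object.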
Mar 11 19:14:52.744: INFO: 
Logging kubelet events for node master3
Mar 11 19:14:52.747: INFO: 
Logging pods the kubelet thinks are on node master3
Mar 11 19:14:52.764: INFO: dns-autoscaler-66498f5c5f-m7mx4 started at 2021-03-11 17:53:11 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container autoscaler ready: true, restart count 1
Mar 11 19:14:52.764: INFO: coredns-59dcc4799b-cd6w4 started at 2021-03-11 17:53:13 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container coredns ready: true, restart count 2
Mar 11 19:14:52.764: INFO: kube-proxy-ktvzn started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:14:52.764: INFO: kube-flannel-fkd4q started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:14:52.764: INFO: 	Container kube-flannel ready: true, restart count 1
Mar 11 19:14:52.764: INFO: kube-multus-ds-amd64-94kvc started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:14:52.764: INFO: node-feature-discovery-controller-ccc948bcc-k5xj8 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container nfd-controller ready: true, restart count 0
Mar 11 19:14:52.764: INFO: node-exporter-xgq5j started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.764: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:14:52.764: INFO: kube-apiserver-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:14:52.764: INFO: kube-controller-manager-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-controller-manager ready: true, restart count 2
Mar 11 19:14:52.764: INFO: kube-scheduler-master3 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.764: INFO: 	Container kube-scheduler ready: true, restart count 2
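The container-status lines above follow a fixed shape (`Container <name> ready: <bool>, restart count <n>`, with an `Init container` variant). A hedged Python sketch for extracting readiness and restart counts from such lines when triaging a failed run (the helper and its return shape are my own, not the framework's):

```python
import re

# Matches both "Container <name>" and "Init container <name>" status lines.
STATUS_RE = re.compile(
    r"(?:Init container|Container) (?P<name>\S+) ready: (?P<ready>true|false), "
    r"restart count (?P<restarts>\d+)"
)

def parse_status(line: str):
    """Return (container_name, is_ready, restart_count) or None if no match."""
    m = STATUS_RE.search(line)
    if m is None:
        return None
    return m["name"], m["ready"] == "true", int(m["restarts"])

line = "Mar 11 19:14:52.764: INFO: \tContainer kube-proxy ready: true, restart count 1"
print(parse_status(line))  # -> ('kube-proxy', True, 1)
```

Filtering a whole log for entries where `is_ready` is false, or where `restart_count` is nonzero, narrows a node dump like this one down to the pods worth inspecting.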
W0311 19:14:52.768936      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:14:52.791: INFO: 
Latency metrics for node master3
Mar 11 19:14:52.791: INFO: 
Logging node info for node node1
Mar 11 19:14:52.793: INFO: Node Info: &Node{ObjectMeta:{node1   /api/v1/nodes/node1 09564b93-d658-496c-8cb0-ca1148040536 30312 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"9a:2f:67:81:a9:4b"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 51 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:03:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:14:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:43 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:43 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:43 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:14:43 +0000 UTC,LastTransitionTime:2021-03-11 17:58:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14aafcebb52e4debae4bcb2b7efb6066,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:87cad20c-59df-4889-8b1c-8831f7bcac2e,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[@ 
:],SizeBytes:1002423280,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:5ae9a5d4f882cae1ddfb3aeb6c5c6645df57e77e3bdaf9083c3cde45c7f9cbc2 golang:alpine3.12],SizeBytes:301038054,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a295107679b0d92cb70145fc18fb53c76e79fceed7e1cf10ed763c7c102c5ebe alpine:3.12],SizeBytes:5577287,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
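The long runs of decimal numbers inside the node dump above are not corruption: they are the `managedFields` `FieldsV1{Raw:*[...]}` byte slices, which Go's `%v` verb prints as space-separated decimal byte values. Decoding them back to ASCII recovers the managed-fields JSON. A minimal sketch (the sample sequence is copied verbatim from the log; any longer sequence from the dump decodes the same way):

```python
# Decode a FieldsV1 Raw byte slice as printed by Go's %v verb
# (space-separated decimal byte values) back into its JSON text.
raw = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
decoded = "".join(chr(int(b)) for b in raw.split())
print(decoded)  # {"f:metadata":{
```

Decoding the full slices shows they are just field-ownership maps (`{"f:metadata":{"f:annotations":...}}`) recorded by server-side apply, which is why they carry no values, only keys.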
Mar 11 19:14:52.794: INFO: 
Logging kubelet events for node node1
Mar 11 19:14:52.797: INFO: 
Logging pods the kubelet thinks are on node node1
Mar 11 19:14:52.815: INFO: collectd-4rvsd started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.815: INFO: cmk-s6v97 started at 2021-03-11 18:03:34 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:14:52.815: INFO: nginx-proxy-node1 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:14:52.815: INFO: node-feature-discovery-worker-nf56t started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:14:52.815: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:14:52.815: INFO: prometheus-k8s-0 started at 2021-03-11 18:04:37 +0000 UTC (0+5 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container grafana ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 19:14:52.815: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 19:14:52.815: INFO: kube-multus-ds-amd64-gtmmz started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:14:52.815: INFO: kube-flannel-8pz9c started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:14:52.815: INFO: cmk-init-discover-node2-29mrv started at 2021-03-11 18:03:13 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:14:52.815: INFO: 	Container init ready: false, restart count 0
Mar 11 19:14:52.815: INFO: 	Container install ready: false, restart count 0
Mar 11 19:14:52.815: INFO: cmk-webhook-888945845-2gpfq started at 2021-03-11 18:03:34 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 19:14:52.815: INFO: node-exporter-mw629 started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.815: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:14:52.815: INFO: pod-service-account-c2adb953-f81a-4c4f-bf95-7c4bc07a7632 started at 2021-03-11 19:09:52 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container test ready: false, restart count 0
Mar 11 19:14:52.815: INFO: kube-proxy-5zz5g started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.815: INFO: 	Container kube-proxy ready: true, restart count 2
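The per-pod lines above follow a fixed shape ("Container <name> ready: <bool>, restart count <n>"), so readiness and restart totals for a node can be tallied mechanically. A minimal sketch, assuming log lines in exactly that format (init-container lines use the lowercase "Init container" prefix and are intentionally not matched here):

```python
import re

# Match kubelet status lines of the form
# "Container <name> ready: <true|false>, restart count <n>".
line_re = re.compile(r"Container (\S+) ready: (true|false), restart count (\d+)")

def tally(lines):
    """Return (ready_count, not_ready_count, total_restarts) for a node's log lines."""
    ready = not_ready = restarts = 0
    for line in lines:
        m = line_re.search(line)
        if not m:
            continue  # skip init-container and non-status lines
        if m.group(2) == "true":
            ready += 1
        else:
            not_ready += 1
        restarts += int(m.group(3))
    return ready, not_ready, restarts

sample = [
    "Container collectd ready: true, restart count 0",
    "Container discover ready: false, restart count 0",
    "Container kube-flannel ready: true, restart count 2",
]
print(tally(sample))  # (2, 1, 2)
```

Applied to the node1 listing above, this makes the anomalies easy to spot: the completed `cmk-init-discover-node2-29mrv` containers report `ready: false` (expected for a Succeeded pod), and the test pod `pod-service-account-c2adb953-...` is the only other not-ready container.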
W0311 19:14:52.819648      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:14:52.861: INFO: 
Latency metrics for node node1
Mar 11 19:14:52.861: INFO: 
Logging node info for node node2
Mar 11 19:14:52.863: INFO: Node Info: &Node{ObjectMeta:{node2   /api/v1/nodes/node2 48280382-daca-4d2c-a30b-cd693b7dd3e5 30319 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"02:6c:14:b4:02:16"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 52 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:01:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:14:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:46 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:46 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:14:46 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:14:46 +0000 UTC,LastTransitionTime:2021-03-11 17:58:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08627116483a4bf79f59d79a4a11d6f4,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:1be00882-edae-44a0-a65e-9f92c05d8856,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 
localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af 
kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 19:14:52.865: INFO: 
Logging kubelet events for node node2
Mar 11 19:14:52.867: INFO: 
Logging pods the kubelet thinks are on node node2
Mar 11 19:14:52.890: INFO: cmk-init-discover-node1-vk7wm started at 2021-03-11 18:01:40 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container init ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container install ready: false, restart count 0
Mar 11 19:14:52.890: INFO: prometheus-operator-f66f5fb4d-f2pkm started at 2021-03-11 18:04:21 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container prometheus-operator ready: true, restart count 0
Mar 11 19:14:52.890: INFO: kube-multus-ds-amd64-rpm89 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:14:52.890: INFO: kube-proxy-znx8n started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:14:52.890: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 19:14:52.890: INFO: cmk-init-discover-node2-c5j6h started at 2021-03-11 18:02:02 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container init ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container install ready: false, restart count 0
Mar 11 19:14:52.890: INFO: cmk-init-discover-node2-qbc6m started at 2021-03-11 18:02:53 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container init ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container install ready: false, restart count 0
Mar 11 19:14:52.890: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz started at 2021-03-11 18:07:22 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 19:14:52.890: INFO: nginx-proxy-node2 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:14:52.890: INFO: cmk-slzjv started at 2021-03-11 18:03:33 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:14:52.890: INFO: node-exporter-x6vqx started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:14:52.890: INFO: collectd-86ww6 started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:14:52.890: INFO: kube-flannel-8wwvj started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:14:52.890: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:14:52.890: INFO: node-feature-discovery-worker-8xdg7 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:14:52.890: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:14:52.890: INFO: cmk-init-discover-node2-9knwq started at 2021-03-11 18:02:23 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container init ready: false, restart count 0
Mar 11 19:14:52.890: INFO: 	Container install ready: false, restart count 0
Mar 11 19:14:52.890: INFO: kubernetes-dashboard-57777fbdcb-zsnff started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:14:52.890: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
W0311 19:14:52.894894      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:14:52.936: INFO: 
Latency metrics for node node2
Mar 11 19:14:52.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3298" for this suite.

• Failure [300.998 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

  Mar 11 19:14:52.627: Unexpected error:
      <*errors.errorString | 0xc000181fd0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:228
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":128,"skipped":2100,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:14:52.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9282
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-8f0a9972-c69e-4f74-ba44-9be2ab93c573
STEP: Creating a pod to test consume configMaps
Mar 11 19:14:53.082: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca" in namespace "projected-9282" to be "Succeeded or Failed"
Mar 11 19:14:53.085: INFO: Pod "pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172839ms
Mar 11 19:14:55.089: INFO: Pod "pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006843402s
Mar 11 19:14:57.093: INFO: Pod "pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010662304s
STEP: Saw pod success
Mar 11 19:14:57.093: INFO: Pod "pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca" satisfied condition "Succeeded or Failed"
Mar 11 19:14:57.096: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:14:57.109: INFO: Waiting for pod pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca to disappear
Mar 11 19:14:57.112: INFO: Pod pod-projected-configmaps-a7ffa43a-cfa4-4599-9381-8270921105ca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:14:57.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9282" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2104,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:14:57.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5914
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-df596384-fcf2-44b6-8414-7a93619f7149
STEP: Creating a pod to test consume configMaps
Mar 11 19:14:57.261: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d" in namespace "projected-5914" to be "Succeeded or Failed"
Mar 11 19:14:57.264: INFO: Pod "pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020152ms
Mar 11 19:14:59.268: INFO: Pod "pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007019947s
Mar 11 19:15:01.272: INFO: Pod "pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010676275s
STEP: Saw pod success
Mar 11 19:15:01.272: INFO: Pod "pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d" satisfied condition "Succeeded or Failed"
Mar 11 19:15:01.274: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:15:01.286: INFO: Waiting for pod pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d to disappear
Mar 11 19:15:01.288: INFO: Pod pod-projected-configmaps-5179869f-769b-4cdc-b3c6-f228475c7a5d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:15:01.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5914" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2138,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:15:01.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-148
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Mar 11 19:15:01.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Mar 11 19:15:01.597: INFO: stderr: ""
Mar 11 19:15:01.597: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:15:01.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-148" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":131,"skipped":2175,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:15:01.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1580
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b5f9c294-7dd5-44f2-80e9-12b88e01b584
STEP: Creating a pod to test consume configMaps
Mar 11 19:15:01.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339" in namespace "configmap-1580" to be "Succeeded or Failed"
Mar 11 19:15:01.743: INFO: Pod "pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531265ms
Mar 11 19:15:03.749: INFO: Pod "pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008569093s
Mar 11 19:15:05.755: INFO: Pod "pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013941032s
STEP: Saw pod success
Mar 11 19:15:05.755: INFO: Pod "pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339" satisfied condition "Succeeded or Failed"
Mar 11 19:15:05.757: INFO: Trying to get logs from node node1 pod pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339 container configmap-volume-test: 
STEP: delete the pod
Mar 11 19:15:05.769: INFO: Waiting for pod pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339 to disappear
Mar 11 19:15:05.771: INFO: Pod pod-configmaps-8bb63cb0-768f-4302-b32c-ce7ac2bec339 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:15:05.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1580" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2197,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:15:05.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9276
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 11 19:15:10.434: INFO: Successfully updated pod "labelsupdateee4a0d83-1038-4a14-a932-3899136bbe89"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:15:12.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9276" for this suite.

• [SLOW TEST:6.678 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2217,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:15:12.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2420
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b32ce7d9-4d70-4975-a4c1-66cc08a08569
STEP: Creating a pod to test consume configMaps
Mar 11 19:15:12.597: INFO: Waiting up to 5m0s for pod "pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d" in namespace "configmap-2420" to be "Succeeded or Failed"
Mar 11 19:15:12.600: INFO: Pod "pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81823ms
Mar 11 19:15:14.603: INFO: Pod "pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005344674s
Mar 11 19:15:16.605: INFO: Pod "pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008109756s
STEP: Saw pod success
Mar 11 19:15:16.605: INFO: Pod "pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d" satisfied condition "Succeeded or Failed"
Mar 11 19:15:16.608: INFO: Trying to get logs from node node2 pod pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d container configmap-volume-test: 
STEP: delete the pod
Mar 11 19:15:16.620: INFO: Waiting for pod pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d to disappear
Mar 11 19:15:16.622: INFO: Pod pod-configmaps-30faf814-d2ad-4ce9-868e-7389e292c35d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:15:16.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2420" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2228,"failed":1,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:15:16.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5902
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:15:20.798: INFO: Waiting up to 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" in namespace "pods-5902" to be "Succeeded or Failed"
Mar 11 19:15:20.800: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004222ms
[... 148 identical "Pending" poll lines (one every ~2s) elided; pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" remained Phase="Pending", readiness=false from 2.005359285s through 4m56.596872162s elapsed ...]
Mar 11 19:20:19.400: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.602672315s
Mar 11 19:20:21.414: INFO: Failed to get logs from node "node1" pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168)
STEP: delete the pod
Mar 11 19:20:21.419: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:20:21.422: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 still exists
Mar 11 19:20:23.425: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:20:23.428: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 still exists
Mar 11 19:20:25.423: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:20:25.426: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 no longer exists
Mar 11 19:20:25.426: INFO: (Attempt 1 of 3) Unexpected error occurred: expected pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" success: Gave up after waiting 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" to be "Succeeded or Failed"
Mar 11 19:20:25.438: INFO: Waiting up to 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" in namespace "pods-5902" to be "Succeeded or Failed"
Mar 11 19:20:25.441: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700366ms
[... 59 identical "Pending" poll lines (one every ~2s) elided; pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" remained Phase="Pending", readiness=false from 2.006315211s through 1m58.234568581s elapsed ...]
Mar 11 19:22:25.676: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.237704328s
Mar 11 19:22:27.679: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.240687728s
Mar 11 19:22:29.682: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.243722559s
Mar 11 19:22:31.685: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.247006936s
Mar 11 19:22:33.689: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.250763629s
Mar 11 19:22:35.692: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.254427904s
Mar 11 19:22:37.696: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.257937853s
Mar 11 19:22:39.700: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.261524415s
Mar 11 19:22:41.704: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.266163005s
Mar 11 19:22:43.707: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.268818637s
Mar 11 19:22:45.712: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.273519745s
Mar 11 19:22:47.715: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.276593519s
Mar 11 19:22:49.718: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.27952218s
Mar 11 19:22:51.721: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.283161823s
Mar 11 19:22:53.725: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.287326326s
Mar 11 19:22:55.733: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.295118376s
Mar 11 19:22:57.737: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.299435782s
Mar 11 19:22:59.740: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.302283367s
Mar 11 19:23:01.744: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.305785742s
Mar 11 19:23:03.749: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.310662304s
Mar 11 19:23:05.753: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.315312314s
Mar 11 19:23:07.759: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.320855414s
Mar 11 19:23:09.763: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.324809581s
Mar 11 19:23:11.767: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.328671754s
Mar 11 19:23:13.770: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.332420011s
Mar 11 19:23:15.773: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.335016616s
Mar 11 19:23:17.777: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.338845413s
Mar 11 19:23:19.781: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.343201549s
Mar 11 19:23:21.785: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.347414872s
Mar 11 19:23:23.793: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.354730995s
Mar 11 19:23:25.797: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.359001941s
Mar 11 19:23:27.801: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.362977804s
Mar 11 19:23:29.805: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.366807444s
Mar 11 19:23:31.809: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.371501074s
Mar 11 19:23:33.813: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.375373271s
Mar 11 19:23:35.817: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.37918222s
Mar 11 19:23:37.822: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.383758261s
Mar 11 19:23:39.825: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.386846545s
Mar 11 19:23:41.829: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.390887003s
Mar 11 19:23:43.832: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.394332561s
Mar 11 19:23:45.837: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.398677456s
Mar 11 19:23:47.841: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.40332709s
Mar 11 19:23:49.846: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.407807323s
Mar 11 19:23:51.850: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.411589815s
Mar 11 19:23:53.854: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.415789029s
Mar 11 19:23:55.857: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.419455328s
Mar 11 19:23:57.862: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.423980325s
Mar 11 19:23:59.866: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.428131907s
Mar 11 19:24:01.869: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.430677145s
Mar 11 19:24:03.874: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.436184971s
Mar 11 19:24:05.878: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.439595707s
Mar 11 19:24:07.882: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.444199115s
Mar 11 19:24:09.886: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.448351919s
Mar 11 19:24:11.890: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.452366835s
Mar 11 19:24:13.894: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.45572954s
Mar 11 19:24:15.897: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.458949482s
Mar 11 19:24:17.900: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.461537536s
Mar 11 19:24:19.902: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.464470398s
Mar 11 19:24:21.905: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.467374171s
Mar 11 19:24:23.908: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.470360006s
Mar 11 19:24:25.912: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.473697709s
Mar 11 19:24:27.916: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.478170063s
Mar 11 19:24:29.919: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.48117922s
Mar 11 19:24:31.924: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.485704377s
Mar 11 19:24:33.927: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.48944621s
Mar 11 19:24:35.931: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.492536918s
Mar 11 19:24:37.933: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.495392903s
Mar 11 19:24:39.936: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.497846579s
Mar 11 19:24:41.940: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.5019242s
Mar 11 19:24:43.943: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.505359466s
Mar 11 19:24:45.946: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.508101963s
Mar 11 19:24:47.950: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.511924798s
Mar 11 19:24:49.953: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.515292657s
Mar 11 19:24:51.957: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.519316064s
Mar 11 19:24:53.961: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.522880033s
Mar 11 19:24:55.966: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.527781971s
Mar 11 19:24:57.970: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.531906284s
Mar 11 19:24:59.974: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.536345824s
Mar 11 19:25:01.978: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.539908415s
Mar 11 19:25:03.981: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.543173085s
Mar 11 19:25:05.986: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.548322605s
Mar 11 19:25:07.991: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.552962532s
Mar 11 19:25:09.995: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.557490231s
Mar 11 19:25:12.000: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.562469284s
Mar 11 19:25:14.004: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.56615045s
Mar 11 19:25:16.008: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.570465849s
Mar 11 19:25:18.012: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.573568419s
Mar 11 19:25:20.014: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.576363419s
Mar 11 19:25:22.020: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.581660324s
Mar 11 19:25:24.024: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.586435644s
Mar 11 19:25:26.040: INFO: Failed to get logs from node "node1" pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168)
STEP: delete the pod
Mar 11 19:25:26.045: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:25:26.047: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 still exists
Mar 11 19:25:28.048: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:25:28.051: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 no longer exists
Mar 11 19:25:28.051: INFO: (Attempt 2 of 3) Unexpected error occurred: expected pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" success: Gave up after waiting 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" to be "Succeeded or Failed"
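The pattern visible in the log above is a simple poll loop: check the pod's phase every ~2 seconds until it reaches "Succeeded" or "Failed", and give up after 5 minutes (after which the framework retries the whole test, up to 3 attempts). Below is a minimal, hedged sketch of that loop in Python; it is not the framework's actual Go code, and `get_phase`, `clock`, and `sleep` are hypothetical injectable parameters added here to keep the sketch self-contained and testable without a cluster.

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a
    phase in `want`, or raise TimeoutError after `timeout` seconds,
    mirroring the log's "Gave up after waiting 5m0s" failure."""
    start = clock()
    while True:
        phase = get_phase()          # e.g. "Pending", "Running", ...
        elapsed = clock() - start
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(
                f"gave up after {elapsed:.0f}s waiting for pod to be "
                f"{' or '.join(want)} (last phase: {phase})")
        sleep(interval)
```

In the real framework the equivalent of `get_phase` is a GET on the pod from the API server, and the outer "Attempt N of 3" retry deletes the pod and recreates it before calling the wait again.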
Mar 11 19:25:28.064: INFO: Waiting up to 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" in namespace "pods-5902" to be "Succeeded or Failed"
Mar 11 19:25:28.067: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710654ms
Mar 11 19:25:30.072: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007343707s
Mar 11 19:25:32.076: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01174345s
Mar 11 19:25:34.080: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016136524s
Mar 11 19:25:36.086: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021296335s
Mar 11 19:25:38.088: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024150895s
Mar 11 19:25:40.094: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029423663s
Mar 11 19:25:42.096: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032270006s
Mar 11 19:25:44.100: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 16.035488681s
Mar 11 19:25:46.103: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 18.038364367s
Mar 11 19:25:48.107: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 20.042408621s
Mar 11 19:25:50.110: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 22.045932293s
Mar 11 19:25:52.114: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 24.050026487s
Mar 11 19:25:54.119: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 26.05479375s
Mar 11 19:25:56.123: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 28.059233183s
Mar 11 19:25:58.127: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 30.062293092s
Mar 11 19:26:00.132: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 32.067421163s
Mar 11 19:26:02.135: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 34.070737671s
Mar 11 19:26:04.142: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 36.077792575s
Mar 11 19:26:06.147: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 38.082392058s
Mar 11 19:26:08.151: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 40.086893052s
Mar 11 19:26:10.157: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 42.0925439s
Mar 11 19:26:12.161: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 44.096876307s
Mar 11 19:26:14.164: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 46.100135304s
Mar 11 19:26:16.169: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 48.104336766s
Mar 11 19:26:18.173: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 50.109189602s
Mar 11 19:26:20.178: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 52.113508035s
Mar 11 19:26:22.182: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 54.117969254s
Mar 11 19:26:24.188: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 56.12329886s
Mar 11 19:26:26.192: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 58.128016233s
Mar 11 19:26:28.196: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.13154271s
Mar 11 19:26:30.200: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.136192363s
Mar 11 19:26:32.204: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.140159806s
Mar 11 19:26:34.209: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.145149695s
Mar 11 19:26:36.212: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.148098453s
Mar 11 19:26:38.216: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.151387418s
Mar 11 19:26:40.220: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.155327872s
Mar 11 19:26:42.223: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.15918408s
Mar 11 19:26:44.229: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.165084284s
Mar 11 19:26:46.232: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.168022384s
Mar 11 19:26:48.237: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.173195956s
Mar 11 19:26:50.242: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.177707942s
Mar 11 19:26:52.246: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.182039565s
Mar 11 19:26:54.252: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.187289894s
Mar 11 19:26:56.256: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.191929519s
Mar 11 19:26:58.261: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.196855757s
Mar 11 19:27:00.264: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.200145881s
Mar 11 19:27:02.267: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.202783727s
Mar 11 19:27:04.272: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.20791699s
Mar 11 19:27:06.276: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.211284412s
Mar 11 19:27:08.280: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.21554396s
Mar 11 19:27:10.285: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.221010163s
Mar 11 19:27:12.289: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.224362558s
Mar 11 19:27:14.293: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.228606514s
Mar 11 19:27:16.297: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.233215624s
Mar 11 19:27:18.302: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.237688925s
Mar 11 19:27:20.306: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.241596917s
Mar 11 19:27:22.310: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.246065247s
Mar 11 19:27:24.315: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.251064835s
Mar 11 19:27:26.320: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.255295293s
Mar 11 19:27:28.323: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.25913188s
Mar 11 19:27:30.329: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.264393272s
Mar 11 19:27:32.333: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.268934592s
Mar 11 19:27:34.338: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.273788244s
Mar 11 19:27:36.341: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.277158636s
Mar 11 19:27:38.346: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.281559988s
Mar 11 19:27:40.351: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.286477282s
Mar 11 19:27:42.355: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.291000876s
Mar 11 19:27:44.359: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.295126328s
Mar 11 19:27:46.364: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.300156743s
Mar 11 19:27:48.369: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.305093363s
Mar 11 19:27:50.373: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.308296059s
Mar 11 19:27:52.378: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.313541645s
Mar 11 19:27:54.383: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.31830479s
Mar 11 19:27:56.388: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.3235818s
Mar 11 19:27:58.393: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.328685912s
Mar 11 19:28:00.398: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.333327346s
Mar 11 19:28:02.401: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.33695687s
Mar 11 19:28:04.406: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.342272117s
Mar 11 19:28:06.410: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.345717555s
Mar 11 19:28:08.415: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.35110383s
Mar 11 19:28:10.419: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.355107828s
Mar 11 19:28:12.422: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.357961208s
Mar 11 19:28:14.425: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.361176741s
Mar 11 19:28:16.428: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.364242814s
[... 64 similar poll entries elided: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" remained Phase="Pending", Reason="", readiness=false, polled every ~2s from Mar 11 19:28:18 (Elapsed: 2m50.36713471s) through Mar 11 19:30:26 (Elapsed: 4m58.670767398s) ...]
Mar 11 19:30:26.735: INFO: Pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.670767398s
Mar 11 19:30:28.750: INFO: Failed to get logs from node "node1" pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168)
STEP: delete the pod
Mar 11 19:30:28.756: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:30:28.760: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 still exists
Mar 11 19:30:30.761: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:30:30.763: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 still exists
Mar 11 19:30:32.761: INFO: Waiting for pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to disappear
Mar 11 19:30:32.763: INFO: Pod client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 no longer exists
Mar 11 19:30:32.763: INFO: (Attempt 3 of 3) Unexpected error occurred: expected pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" success: Gave up after waiting 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" to be "Succeeded or Failed"
goroutine 215 [running]:
runtime/debug.Stack(0x4, 0x4915cde, 0x2)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x22
k8s.io/kubernetes/test/e2e/common.expectNoErrorWithRetries(0xc002b5b1f0, 0x3, 0xc003bd56a0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:170 +0x2ca
k8s.io/kubernetes/test/e2e/common.glob..func18.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:543 +0xa61
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000b191a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xb8
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000b191a0, 0xc000198380, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0009327a0, 0x51d2300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001ae71d0, 0x0, 0x51d2300, 0xc0001c68c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x5b5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001ae71d0, 0x51d2300, 0xc0001c68c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0x101
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc002366280, 0xc001ae71d0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc002366280, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x120
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc002366280, 0xc00235c1a8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001b02d0, 0x7f0d56fdecd0, 0xc00328ac00, 0x495020e, 0x14, 0xc00286ef90, 0x3, 0x3, 0x529bc60, 0xc0001c68c0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x51d6a40, 0xc00328ac00, 0x495020e, 0x14, 0xc0013d7b40, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x51d6a40, 0xc00328ac00, 0x495020e, 0x14, 0xc001317e80, 0x2, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00328ac00, 0x4afad60)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
Mar 11 19:30:32.764: FAIL: Container should have service environment variables set
Unexpected error:
    <*errors.errorString | 0xc002ba7e70>: {
        s: "expected pod \"client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168\" success: Gave up after waiting 5m0s for pod \"client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168\" to be \"Succeeded or Failed\"",
    }
    expected pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" success: Gave up after waiting 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/common.glob..func18.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:543 +0xa61
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00328ac00, 0x4afad60)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "pods-5902".
STEP: Found 34 events.
Mar 11 19:30:32.769: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {default-scheduler } Scheduled: Successfully assigned pods-5902/client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to node1
Mar 11 19:30:32.769: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {default-scheduler } Scheduled: Successfully assigned pods-5902/client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to node1
Mar 11 19:30:32.769: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {default-scheduler } Scheduled: Successfully assigned pods-5902/client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168 to node1
Mar 11 19:30:32.769: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {default-scheduler } Scheduled: Successfully assigned pods-5902/server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4 to node2
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:18 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {kubelet node2} Pulled: Successfully pulled image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:18 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {kubelet node2} Pulling: Pulling image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:18 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {multus } AddedInterface: Add eth0 [10.244.4.125/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:19 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {kubelet node2} Created: Created container srv
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:19 +0000 UTC - event for server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4: {kubelet node2} Started: Started container srv
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:22 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:22 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.118/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:23 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ErrImagePull
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:23 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:24 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:26 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ImagePullBackOff
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:26 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:15:26 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.119/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:16:06 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:27 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:27 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.120/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:28 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ErrImagePull
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:28 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:28 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:30 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.121/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:30 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ImagePullBackOff
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:20:30 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:25:29 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.122/24]
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:25:29 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.769: INFO: At 2021-03-11 19:25:30 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ErrImagePull
Mar 11 19:30:32.770: INFO: At 2021-03-11 19:25:30 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Mar 11 19:30:32.770: INFO: At 2021-03-11 19:25:31 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Mar 11 19:30:32.770: INFO: At 2021-03-11 19:25:33 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {multus } AddedInterface: Add eth0 [10.244.3.123/24]
Mar 11 19:30:32.770: INFO: At 2021-03-11 19:25:33 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Mar 11 19:30:32.770: INFO: At 2021-03-11 19:25:33 +0000 UTC - event for client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168: {kubelet node1} Failed: Error: ImagePullBackOff
Mar 11 19:30:32.773: INFO: POD                                                  NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:30:32.773: INFO: server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:15:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:15:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:15:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:15:16 +0000 UTC  }]
Mar 11 19:30:32.773: INFO: 
Mar 11 19:30:32.778: INFO: 
Logging node info for node master1
Mar 11 19:30:32.780: INFO: Node Info: &Node{ObjectMeta:{master1   /api/v1/nodes/master1 bc51b401-422a-4e82-b449-caa7cdc72ceb 33975 0 2021-03-11 17:50:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0e:0e:ac:80:fe:e5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [managedFields entries for kubeadm (2021-03-11 17:50:19), kube-controller-manager (2021-03-11 17:52:41), flanneld (2021-03-11 17:54:32), and kubelet (2021-03-11 19:30:31): raw FieldsV1 byte dumps elided]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0cb21bb9b8b64bf38523b2f5a8bdad14,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:4a77cc46-4c80-409c-8c40-c24648f76e32,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5617799,},ContainerImage{Names:[alpine@sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be alpine:latest],SizeBytes:5613158,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
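The `FieldsV1{Raw:*[123 34 …]}` payloads in the node dump above are server-side-apply managed-fields JSON that the e2e framework prints as decimal ASCII byte values. A minimal sketch for decoding one of these arrays back into readable JSON (the short `raw_bytes` sample here is illustrative, not copied from the dump):

```python
import json

# Decimal ASCII byte values as they appear inside Raw:*[...] in the log.
# This short sample decodes to '{"f:metadata":{}}'.
raw_bytes = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97,
             116, 97, 34, 58, 123, 125, 125]

# Convert the integer list to bytes, decode as UTF-8, then parse as JSON.
decoded = bytes(raw_bytes).decode("utf-8")
managed_fields = json.loads(decoded)
print(managed_fields)  # {'f:metadata': {}}
```

Pasting a full `Raw:*[...]` array from the log into `raw_bytes` (commas preserved) recovers the complete managed-fields document, e.g. the per-condition `f:lastHeartbeatTime`/`f:status` entries visible as byte runs above.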
Mar 11 19:30:32.781: INFO: 
Logging kubelet events for node master1
Mar 11 19:30:32.783: INFO: 
Logging pods the kubelet thinks are on node master1
Mar 11 19:30:32.798: INFO: docker-registry-docker-registry-6d4484d8f9-pkjwp started at 2021-03-11 17:55:49 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container docker-registry ready: true, restart count 0
Mar 11 19:30:32.798: INFO: 	Container nginx ready: true, restart count 0
Mar 11 19:30:32.798: INFO: node-exporter-b54mc started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:32.798: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:30:32.798: INFO: kube-scheduler-master1 started at 2021-03-11 18:07:23 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar 11 19:30:32.798: INFO: kube-apiserver-master1 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:30:32.798: INFO: kube-multus-ds-amd64-2jdtx started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:30:32.798: INFO: coredns-59dcc4799b-cp4vq started at 2021-03-11 17:53:08 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container coredns ready: true, restart count 1
Mar 11 19:30:32.798: INFO: kube-controller-manager-master1 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar 11 19:30:32.798: INFO: kube-proxy-bwz9p started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:30:32.798: INFO: kube-flannel-pzw7v started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:30:32.798: INFO: 	Init container install-cni ready: true, restart count 2
Mar 11 19:30:32.798: INFO: 	Container kube-flannel ready: true, restart count 1
W0311 19:30:32.803120      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:30:32.827: INFO: 
Latency metrics for node master1
Mar 11 19:30:32.827: INFO: 
Logging node info for node master2
Mar 11 19:30:32.831: INFO: Node Info: &Node{ObjectMeta:{master2   /api/v1/nodes/master2 81d12a4f-6154-421a-896a-6071517cc7cf 33973 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8a:67:dc:b1:33:9d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:30:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 
116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234771968 0} {} 196518332Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361914368 0} {} 195665932Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:30:31 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b3061860c4ba472e9c76577f315c0ddb,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bc6d20a6-057d-4d5d-af80-cb65b29e2a9f,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 19:30:32.832: INFO: 
Logging kubelet events for node master2
Mar 11 19:30:32.834: INFO: 
Logging pods the kubelet thinks are on node master2
Mar 11 19:30:32.848: INFO: kube-scheduler-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-scheduler ready: true, restart count 2
Mar 11 19:30:32.848: INFO: kube-proxy-qg4j5 started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:30:32.848: INFO: kube-flannel-kfjhn started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:30:32.848: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:30:32.848: INFO: kube-multus-ds-amd64-xx6h7 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:30:32.848: INFO: node-exporter-j8bwb started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:32.848: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:30:32.848: INFO: kube-apiserver-master2 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:30:32.848: INFO: kube-controller-manager-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.848: INFO: 	Container kube-controller-manager ready: true, restart count 2
W0311 19:30:32.852234      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:30:32.881: INFO: 
Latency metrics for node master2
Mar 11 19:30:32.881: INFO: 
Logging node info for node master3
Mar 11 19:30:32.883: INFO: Node Info: &Node{ObjectMeta:{master3   /api/v1/nodes/master3 2ec4f135-9e61-46a6-a537-0ad6199eddb1 33987 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4e:4a:32:07:d3:68"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 
17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:30:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 
97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 
44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 
58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:32 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:32 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:32 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:30:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4167bf4cb2634ca88fc2626bbda0ce42,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:52af946c-b482-4940-ad01-ee4a9a06c438,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 
quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
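The `FieldsV1{Raw:*[...]}` arrays in these Node Info dumps are JSON documents logged as space-separated decimal byte values. A minimal sketch of recovering the JSON (the helper name `decode_fieldsv1` is ours, not part of the e2e framework):

```python
import json

def decode_fieldsv1(byte_dump: str) -> dict:
    """Parse a space-separated decimal byte dump as a JSON object."""
    raw = bytes(int(b) for b in byte_dump.split())
    return json.loads(raw)

# Excerpt: the kubeadm managed-fields entry from the node1 dump further down.
dump = ("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 "
        "97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 "
        "107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 "
        "114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 "
        "107 101 116 34 58 123 125 125 125 125")
print(json.dumps(decode_fieldsv1(dump)))
# {"f:metadata": {"f:annotations": {"f:kubeadm.alpha.kubernetes.io/cri-socket": {}}}}
```

Applied to the full dumps, this yields the server-side-apply field ownership records (`f:status`, `f:conditions`, and so on) that the logger prints raw.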
Mar 11 19:30:32.884: INFO: 
Logging kubelet events for node master3
Mar 11 19:30:32.886: INFO: 
Logging pods the kubelet thinks are on node master3
Mar 11 19:30:32.900: INFO: kube-multus-ds-amd64-94kvc started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:30:32.900: INFO: dns-autoscaler-66498f5c5f-m7mx4 started at 2021-03-11 17:53:11 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container autoscaler ready: true, restart count 1
Mar 11 19:30:32.900: INFO: coredns-59dcc4799b-cd6w4 started at 2021-03-11 17:53:13 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container coredns ready: true, restart count 2
Mar 11 19:30:32.900: INFO: kube-proxy-ktvzn started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:30:32.900: INFO: kube-flannel-fkd4q started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:30:32.900: INFO: 	Container kube-flannel ready: true, restart count 1
Mar 11 19:30:32.900: INFO: kube-scheduler-master3 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container kube-scheduler ready: true, restart count 2
Mar 11 19:30:32.900: INFO: node-feature-discovery-controller-ccc948bcc-k5xj8 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container nfd-controller ready: true, restart count 0
Mar 11 19:30:32.900: INFO: node-exporter-xgq5j started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:32.900: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:30:32.900: INFO: kube-apiserver-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.900: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 19:30:32.901: INFO: kube-controller-manager-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.901: INFO: 	Container kube-controller-manager ready: true, restart count 2
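The per-container status lines above follow a fixed pattern, so restart counts can be tallied from the log text with a short script (a sketch; `restart_counts` is our own helper, not part of the framework, and init-container lines are deliberately skipped):

```python
import re

# Matches lines like "Container kube-proxy ready: true, restart count 1".
# "Init container ..." lines use a lowercase "container" and do not match.
LINE_RE = re.compile(r"Container (\S+) ready: (true|false), restart count (\d+)")

def restart_counts(log_lines):
    """Map container name -> restart count for matching log lines."""
    counts = {}
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            counts[m.group(1)] = int(m.group(3))
    return counts

sample = [
    "Mar 11 19:30:32.900: INFO: \tContainer kube-multus ready: true, restart count 1",
    "Mar 11 19:30:32.900: INFO: \tContainer coredns ready: true, restart count 2",
]
print(restart_counts(sample))
# {'kube-multus': 1, 'coredns': 2}
```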
W0311 19:30:32.904876      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:30:32.929: INFO: 
Latency metrics for node master3
Mar 11 19:30:32.929: INFO: 
Logging node info for node node1
Mar 11 19:30:32.932: INFO: Node Info: &Node{ObjectMeta:{node1   /api/v1/nodes/node1 09564b93-d658-496c-8cb0-ca1148040536 33958 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"9a:2f:67:81:a9:4b"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 51 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:03:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:30:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:27 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:27 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:27 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:30:27 +0000 UTC,LastTransitionTime:2021-03-11 17:58:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14aafcebb52e4debae4bcb2b7efb6066,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:87cad20c-59df-4889-8b1c-8831f7bcac2e,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[@ 
:],SizeBytes:1002423280,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:5ae9a5d4f882cae1ddfb3aeb6c5c6645df57e77e3bdaf9083c3cde45c7f9cbc2 golang:alpine3.12],SizeBytes:301038054,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a295107679b0d92cb70145fc18fb53c76e79fceed7e1cf10ed763c7c102c5ebe alpine:3.12],SizeBytes:5577287,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
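The `Raw:*[123 34 ...]` payloads in the Node Info dump above are FieldsV1 managed-fields JSON, printed as space-separated decimal byte values. A minimal helper (hypothetical, not part of the e2e framework) can turn one of these runs back into readable JSON:

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Decode a space-separated decimal byte string (as logged in
    ManagedFields Raw:*[...]) into the JSON object it encodes."""
    data = bytes(int(b) for b in raw.split())
    return json.loads(data.decode("utf-8"))

# Example: the bytes for {"f:Port":{}} as they would appear in the log.
sample = "123 34 102 58 80 111 114 116 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {'f:Port': {}}
```

Applied to the full arrays above, this reproduces the `metadata`/`spec`/`status` field ownership maps recorded by kube-controller-manager, kubeadm, flanneld, nfd-master, and the kubelet.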
Mar 11 19:30:32.933: INFO: 
Logging kubelet events for node node1
Mar 11 19:30:32.935: INFO: 
Logging pods the kubelet thinks are on node node1
Mar 11 19:30:32.948: INFO: nginx-proxy-node1 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:30:32.948: INFO: cmk-s6v97 started at 2021-03-11 18:03:34 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:30:32.948: INFO: kube-multus-ds-amd64-gtmmz started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:30:32.948: INFO: node-feature-discovery-worker-nf56t started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:30:32.948: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:30:32.948: INFO: prometheus-k8s-0 started at 2021-03-11 18:04:37 +0000 UTC (0+5 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container grafana ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 19:30:32.948: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 19:30:32.948: INFO: node-exporter-mw629 started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:30:32.948: INFO: kube-proxy-5zz5g started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container kube-proxy ready: true, restart count 2
Mar 11 19:30:32.948: INFO: kube-flannel-8pz9c started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:30:32.948: INFO: cmk-init-discover-node2-29mrv started at 2021-03-11 18:03:13 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:30:32.948: INFO: 	Container init ready: false, restart count 0
Mar 11 19:30:32.948: INFO: 	Container install ready: false, restart count 0
Mar 11 19:30:32.948: INFO: cmk-webhook-888945845-2gpfq started at 2021-03-11 18:03:34 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 19:30:32.948: INFO: collectd-4rvsd started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:32.948: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:30:32.948: INFO: 	Container rbac-proxy ready: true, restart count 0
W0311 19:30:32.952563      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:30:32.985: INFO: 
Latency metrics for node node1
Mar 11 19:30:32.985: INFO: 
Logging node info for node node2
Mar 11 19:30:32.988: INFO: Node Info: &Node{ObjectMeta:{node2   /api/v1/nodes/node2 48280382-daca-4d2c-a30b-cd693b7dd3e5 33972 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"02:6c:14:b4:02:16"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 52 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:01:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:30:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:30 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:30 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 19:30:30 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 19:30:30 +0000 UTC,LastTransitionTime:2021-03-11 17:58:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08627116483a4bf79f59d79a4a11d6f4,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:1be00882-edae-44a0-a65e-9f92c05d8856,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 
localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af 
kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 19:30:32.989: INFO: 
Logging kubelet events for node node2
Mar 11 19:30:32.991: INFO: 
Logging pods the kubelet thinks are on node node2
Mar 11 19:30:33.014: INFO: nginx-proxy-node2 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:30:33.014: INFO: kube-proxy-znx8n started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:30:33.014: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 19:30:33.014: INFO: cmk-init-discover-node2-c5j6h started at 2021-03-11 18:02:02 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container init ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container install ready: false, restart count 0
Mar 11 19:30:33.014: INFO: cmk-init-discover-node2-qbc6m started at 2021-03-11 18:02:53 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container init ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container install ready: false, restart count 0
Mar 11 19:30:33.014: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz started at 2021-03-11 18:07:22 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 19:30:33.014: INFO: kube-flannel-8wwvj started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:30:33.014: INFO: cmk-slzjv started at 2021-03-11 18:03:33 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:30:33.014: INFO: node-exporter-x6vqx started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:30:33.014: INFO: collectd-86ww6 started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:30:33.014: INFO: server-envvars-e2e193d6-5b6c-435c-aaea-3ce0222445a4 started at 2021-03-11 19:15:16 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container srv ready: true, restart count 0
Mar 11 19:30:33.014: INFO: kubernetes-dashboard-57777fbdcb-zsnff started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Mar 11 19:30:33.014: INFO: node-feature-discovery-worker-8xdg7 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:30:33.014: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:30:33.014: INFO: cmk-init-discover-node2-9knwq started at 2021-03-11 18:02:23 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container init ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container install ready: false, restart count 0
Mar 11 19:30:33.014: INFO: kube-multus-ds-amd64-rpm89 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:30:33.014: INFO: cmk-init-discover-node1-vk7wm started at 2021-03-11 18:01:40 +0000 UTC (0+3 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container init ready: false, restart count 0
Mar 11 19:30:33.014: INFO: 	Container install ready: false, restart count 0
Mar 11 19:30:33.014: INFO: prometheus-operator-f66f5fb4d-f2pkm started at 2021-03-11 18:04:21 +0000 UTC (0+2 container statuses recorded)
Mar 11 19:30:33.014: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:30:33.014: INFO: 	Container prometheus-operator ready: true, restart count 0
W0311 19:30:33.018816      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:30:33.051: INFO: 
Latency metrics for node node2
Mar 11 19:30:33.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5902" for this suite.

• Failure [916.426 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

  Mar 11 19:30:32.764: Container should have service environment variables set
  Unexpected error:
      <*errors.errorString | 0xc002ba7e70>: {
          s: "expected pod \"client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168\" success: Gave up after waiting 5m0s for pod \"client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168\" to be \"Succeeded or Failed\"",
      }
      expected pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" success: Gave up after waiting 5m0s for pod "client-envvars-1b71df83-8c86-45e4-986a-cbbff37fc168" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:543
------------------------------
{"msg":"FAILED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2262,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
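The failure above concerns Kubernetes' automatic service environment variables: for every Service that exists when a pod starts, the kubelet injects variables such as `<SVC>_SERVICE_HOST` and `<SVC>_SERVICE_PORT` into the pod's containers. A minimal sketch of the mechanism the test exercises (all names here are hypothetical, not the test's actual objects):

```yaml
# A Service named "my-svc" causes later-started pods in the namespace to
# receive MY_SVC_SERVICE_HOST / MY_SVC_SERVICE_PORT environment variables.
apiVersion: v1
kind: Service
metadata:
  name: my-svc            # hypothetical name
spec:
  selector:
    app: server
  ports:
  - port: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # Print the injected variables; the e2e test asserts on equivalent output.
    command: ["sh", "-c", "env | grep MY_SVC"]
```

Note the ordering requirement: the variables are only injected for Services created before the pod starts, which is why the test above creates a server pod and Service first and a client pod afterwards.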
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:30:33.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7531
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 11 19:33:37.979: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:33:37.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7531" for this suite.

• [SLOW TEST:184.939 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2267,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
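The passing case above verifies `terminationMessagePolicy: FallbackToLogsOnError`: the container's log tail is used as the termination message only when the container fails, so on a successful exit the message stays empty, which is exactly what the test asserts. A sketch of a pod exercising this policy (pod name hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # Exits successfully, so no log fallback occurs and the
    # termination message remains empty.
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
```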
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:33:37.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9234
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:33:38.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440" in namespace "downward-api-9234" to be "Succeeded or Failed"
Mar 11 19:33:38.135: INFO: Pod "downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.779438ms
Mar 11 19:33:40.138: INFO: Pod "downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005982675s
Mar 11 19:33:42.142: INFO: Pod "downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009087967s
STEP: Saw pod success
Mar 11 19:33:42.142: INFO: Pod "downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440" satisfied condition "Succeeded or Failed"
Mar 11 19:33:42.145: INFO: Trying to get logs from node node2 pod downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440 container client-container: 
STEP: delete the pod
Mar 11 19:33:42.165: INFO: Waiting for pod downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440 to disappear
Mar 11 19:33:42.167: INFO: Pod downwardapi-volume-78c65c95-fbaf-400d-9065-8e705ac3a440 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:33:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9234" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2272,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
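The downward API volume test above mounts the container's own CPU request into its filesystem via a `resourceFieldRef`. A minimal sketch of that mechanism (names and the 250m request are hypothetical, not the test's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Read back the projected CPU request from the mounted file.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m               # hypothetical value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```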
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:33:42.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2279
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:33:53.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2279" for this suite.

• [SLOW TEST:11.160 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":137,"skipped":2276,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
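The ResourceQuota lifecycle exercised above follows a simple pattern: create a quota that counts ReplicationControllers, create an RC and watch the quota's `used` count rise, then delete the RC and watch usage release. A sketch of such a quota (name and limit hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: rc-quota                # hypothetical name
spec:
  hard:
    replicationcontrollers: "2" # cap on RC objects in the namespace
```

`kubectl describe resourcequota rc-quota` shows the `hard` limit alongside the live `used` count, which is the status the test polls.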
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:33:53.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-6345
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793
Mar 11 19:33:53.466: INFO: Pod name my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793: Found 0 pods out of 1
Mar 11 19:33:58.469: INFO: Pod name my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793: Found 1 pods out of 1
Mar 11 19:33:58.469: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793" are running
Mar 11 19:33:58.471: INFO: Pod "my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793-9gf2r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:33:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:33:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:33:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:33:53 +0000 UTC Reason: Message:}])
Mar 11 19:33:58.471: INFO: Trying to dial the pod
Mar 11 19:34:03.483: INFO: Controller my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793: Got expected result from replica 1 [my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793-9gf2r]: "my-hostname-basic-887aa092-f2ee-47f8-aa33-6a04f14ec793-9gf2r", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6345" for this suite.

• [SLOW TEST:10.156 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":138,"skipped":2292,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
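The ReplicationController test above creates a one-replica controller whose pod serves its own hostname, then dials the replica and checks the response matches the pod name. A simplified sketch of that kind of RC (name simplified; the image matches one listed in the node dump above, but treat the details as illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic       # simplified name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: serve-hostname
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        # agnhost's serve-hostname mode replies with the pod's hostname.
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```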
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:03.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-7244
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7244.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7244.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7244.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7244.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 19:34:09.645: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.649: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.651: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.654: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.664: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.668: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.671: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.674: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7244.svc.cluster.local from pod dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d: the server could not find the requested resource (get pods dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d)
Mar 11 19:34:09.680: INFO: Lookups using dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7244.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7244.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local jessie_udp@dns-test-service-2.dns-7244.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7244.svc.cluster.local]

Mar 11 19:34:14.711: INFO: DNS probes using dns-7244/dns-test-9eac72e6-41ff-441b-8d8e-30dbb0dda41d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:14.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7244" for this suite.

• [SLOW TEST:11.241 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":139,"skipped":2301,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
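The lookups above probe the DNS records that a headless service exposes for a pod that sets `spec.hostname` and `spec.subdomain`. A minimal sketch of how those FQDNs are composed, using the names that appear in the log (`dns-querier-2`, `dns-test-service-2`, namespace `dns-7244`):

```python
def pod_fqdn(hostname, subdomain, namespace, domain="cluster.local"):
    # <hostname>.<subdomain>.<namespace>.svc.<domain> -- the per-pod record
    # created when the pod's spec.subdomain matches a headless service name
    return f"{hostname}.{subdomain}.{namespace}.svc.{domain}"

def service_fqdn(service, namespace, domain="cluster.local"):
    # the service-level record the test also resolves over UDP and TCP
    return f"{service}.{namespace}.svc.{domain}"

# names probed by the wheezy/jessie query pods in the log above
assert pod_fqdn("dns-querier-2", "dns-test-service-2", "dns-7244") == \
    "dns-querier-2.dns-test-service-2.dns-7244.svc.cluster.local"
assert service_fqdn("dns-test-service-2", "dns-7244") == \
    "dns-test-service-2.dns-7244.svc.cluster.local"
```

The early "Unable to read" failures are expected while the records propagate; the probe loop retries until both UDP and TCP lookups succeed, which is why the run still passes.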
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:14.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-4750
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:34:14.868: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea" in namespace "projected-4750" to be "Succeeded or Failed"
Mar 11 19:34:14.870: INFO: Pod "downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086167ms
Mar 11 19:34:16.875: INFO: Pod "downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006815178s
Mar 11 19:34:18.878: INFO: Pod "downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009681889s
STEP: Saw pod success
Mar 11 19:34:18.878: INFO: Pod "downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea" satisfied condition "Succeeded or Failed"
Mar 11 19:34:18.880: INFO: Trying to get logs from node node2 pod downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea container client-container: 
STEP: delete the pod
Mar 11 19:34:18.895: INFO: Waiting for pod downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea to disappear
Mar 11 19:34:18.897: INFO: Pod downwardapi-volume-1a27450b-cad5-4787-96e9-aa2bd07daeea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:18.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4750" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2309,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
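The downward API volume above exposes the container's CPU request through a `resourceFieldRef`. The documented behavior is that the quantity is divided by the field's `divisor` and rounded up; a sketch of that conversion, assuming millicpu integers (the specific request value is not shown in the log):

```python
import math

def downward_api_value(request_millicpu, divisor_millicpu):
    # resourceFieldRef values are the quantity divided by the divisor,
    # rounded up to an integer (sketch; assumes millicpu inputs)
    return math.ceil(request_millicpu / divisor_millicpu)

# a hypothetical 250m cpu request: divisor "1m" yields 250,
# the default divisor "1" (one full core) rounds up to 1
assert downward_api_value(250, 1) == 250
assert downward_api_value(250, 1000) == 1
```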
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:18.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1200
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-5e75897b-78f7-4ef5-a456-75182bdf7158
STEP: Creating a pod to test consume configMaps
Mar 11 19:34:19.043: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a" in namespace "projected-1200" to be "Succeeded or Failed"
Mar 11 19:34:19.046: INFO: Pod "pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660359ms
Mar 11 19:34:21.048: INFO: Pod "pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005195606s
Mar 11 19:34:23.053: INFO: Pod "pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009861575s
STEP: Saw pod success
Mar 11 19:34:23.053: INFO: Pod "pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a" satisfied condition "Succeeded or Failed"
Mar 11 19:34:23.056: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:34:23.068: INFO: Waiting for pod pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a to disappear
Mar 11 19:34:23.070: INFO: Pod pod-projected-configmaps-b6ec6782-eff6-48d3-9fc4-73e5b484361a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1200" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2313,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
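The projected-configMap test above mounts the configMap as a volume, so each data key becomes a file under the mount path. A sketch of that key-to-path mapping (the mount path and key name here are hypothetical, not taken from the log):

```python
def projected_paths(mount_path, data):
    # each configMap key is surfaced as a file named after the key,
    # directly under the volume's mount path (sketch)
    return {key: f"{mount_path}/{key}" for key in data}

files = projected_paths("/etc/projected-configmap-volume", {"data-1": "value-1"})
assert files["data-1"] == "/etc/projected-configmap-volume/data-1"
```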
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:23.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4729
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:34:23.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Mar 11 19:34:23.312: INFO: stderr: ""
Mar 11 19:34:23.312: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.16\", GitCommit:\"7a98bb2b7c9112935387825f2fce1b7d40b76236\", GitTreeState:\"clean\", BuildDate:\"2021-02-17T12:01:24Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T16:04:18Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:23.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4729" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":142,"skipped":2356,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
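The kubectl-version check above runs `kubectl version` and asserts that the output carries both halves of the version report. A minimal sketch of that assertion (the real test may inspect more fields than this):

```python
def all_version_data_printed(stdout: str) -> bool:
    # the conformance check is satisfied when both the client and the
    # server version.Info blocks appear in the command output (sketch)
    return "Client Version" in stdout and "Server Version" in stdout

# trimmed from the stdout captured in the log above
sample = ('Client Version: version.Info{GitVersion:"v1.18.16"}\n'
          'Server Version: version.Info{GitVersion:"v1.18.8"}\n')
assert all_version_data_printed(sample)
```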
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:23.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-7408
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:34:23.446: INFO: Creating deployment "webserver-deployment"
Mar 11 19:34:23.450: INFO: Waiting for observed generation 1
Mar 11 19:34:25.457: INFO: Waiting for all required pods to come up
Mar 11 19:34:25.461: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 11 19:34:37.469: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 11 19:34:37.473: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 11 19:34:37.479: INFO: Updating deployment webserver-deployment
Mar 11 19:34:37.479: INFO: Waiting for observed generation 2
Mar 11 19:34:39.485: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 11 19:34:39.488: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 11 19:34:39.490: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 11 19:34:39.498: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 11 19:34:39.499: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 11 19:34:39.502: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 11 19:34:39.507: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 11 19:34:39.507: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 11 19:34:39.514: INFO: Updating deployment webserver-deployment
Mar 11 19:34:39.514: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 11 19:34:39.518: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 11 19:34:39.520: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
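The replica counts being verified above follow from the RollingUpdate bounds: total pods may surge to `replicas + maxSurge`, and at least `replicas - maxUnavailable` must stay available. A sketch of that arithmetic against the numbers in this log (the proportional split of the surplus between the two replicasets is decided by the deployment controller and is not recomputed here):

```python
def rollout_bounds(replicas, max_surge, max_unavailable):
    # (maximum total pods across replicasets, minimum available pods)
    # during a RollingUpdate rollout -- sketch, absolute values only
    return replicas + max_surge, replicas - max_unavailable

# original 10-replica deployment with maxSurge=3, maxUnavailable=2:
# at least 8 pods must stay available, matching availableReplicas=8 above
assert rollout_bounds(10, 3, 2) == (13, 8)

# after scaling to 30 replicas the replicasets may total 33 pods,
# which the controller splits 20 + 13 in the log above
assert rollout_bounds(30, 3, 2) == (33, 28)
assert 20 + 13 == 30 + 3
```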
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 11 19:34:39.526: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-7408 /apis/apps/v1/namespaces/deployment-7408/deployments/webserver-deployment f9a792b6-603e-497d-a3d7-e8ce7179485b 35407 3 2021-03-11 19:34:23 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 
34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037a2d18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2021-03-11 19:34:37 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-11 19:34:39 +0000 UTC,LastTransitionTime:2021-03-11 19:34:39 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Mar 11 19:34:39.529: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-7408 /apis/apps/v1/namespaces/deployment-7408/replicasets/webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 35405 3 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f9a792b6-603e-497d-a3d7-e8ce7179485b 0xc0037a31c7 0xc0037a31c8}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 57 97 55 57 50 98 54 45 54 48 51 101 45 52 57 55 100 45 97 51 100 55 45 101 56 99 101 55 49 55 57 52 56 53 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037a3248  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:34:39.530: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Mar 11 19:34:39.530: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-7408 /apis/apps/v1/namespaces/deployment-7408/replicasets/webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 35403 3 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f9a792b6-603e-497d-a3d7-e8ce7179485b 0xc0037a32a7 0xc0037a32a8}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 57 97 55 57 50 98 54 45 54 48 51 101 45 52 57 55 100 45 97 51 100 55 45 101 56 99 101 55 49 55 57 52 56 53 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 
100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037a3318  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:34:39.535: INFO: Pod "webserver-deployment-6676bcd6d4-bw8qk" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bw8qk webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-bw8qk b6bb8b07-58a3-4cc7-af1c-eece01bf3a14 35381 0 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385e27f 0xc00385e290}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-03-11 19:34:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.536: INFO: Pod "webserver-deployment-6676bcd6d4-cgnw7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cgnw7 webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-cgnw7 16bfd96d-6587-4e59-9e29-cc6efc987a3b 35387 0 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385e42f 0xc00385e440}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-03-11 19:34:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.536: INFO: Pod "webserver-deployment-6676bcd6d4-knwrd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-knwrd webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-knwrd e89c2353-f2ca-4b47-b777-455b459fae24 35412 0 2021-03-11 19:34:39 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385e5cf 0xc00385e5e0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privi
leged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.536: INFO: Pod "webserver-deployment-6676bcd6d4-lb56q" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lb56q webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-lb56q 6a513ea1-762b-4fca-a957-b02eeeb08d45 35371 0 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385e6df 0xc00385e6f0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-03-11 19:34:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.537: INFO: Pod "webserver-deployment-6676bcd6d4-p2p8j" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p2p8j webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-p2p8j d9572c40-ab61-467c-8085-32edd43a35a7 35395 0 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385e87f 0xc00385e890}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2021-03-11 19:34:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
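The `FieldsV1{Raw:*[123 34 102 …]}` sequences in the dump above are not corruption: the e2e framework prints the managed-fields payload as a Go byte slice, and each number is the ASCII code of one character of the underlying JSON. A minimal sketch of decoding it (the slice literal here is just the first few bytes copied from the log, not the full payload):

```go
package main

import "fmt"

func main() {
	// Prefix of a FieldsV1 Raw byte slice as printed in the log above;
	// 123 is '{', 34 is '"', 102 is 'f', and so on.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}

	// The slice is plain UTF-8 JSON, so a string conversion recovers it.
	fmt.Println(string(raw)) // {"f:metadata":{
}
```

Decoding the full slice the same way yields the managed-fields JSON (`{"f:metadata":{"f:generateName":{},…}`) that kube-controller-manager recorded for this pod.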
Mar 11 19:34:39.537: INFO: Pod "webserver-deployment-6676bcd6d4-z88qq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z88qq webserver-deployment-6676bcd6d4- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-6676bcd6d4-z88qq b70d02d7-319e-400d-8975-6f4a03dd0e31 35380 0 2021-03-11 19:34:37 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 ca15ed99-fa9b-45a4-9c39-d325581f613e 0xc00385ea1f 0xc00385ea30}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 97 49 53 101 100 57 57 45 102 97 57 98 45 52 53 97 52 45 57 99 51 57 45 100 51 50 53 53 56 49 102 54 49 51 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 
101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2021-03-11 19:34:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.537: INFO: Pod "webserver-deployment-84855cf797-2f7kl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-2f7kl webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-2f7kl fd4e487b-e8a8-4d0c-8f41-925ba3cba566 35410 0 2021-03-11 19:34:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385ebbf 0xc00385ebd0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 51 57 53 54 52 102 48 45 97 56 52 102 45 52 56 101 54 45 97 48 101 49 45 57 100 48 99 48 102 49 98 100 55 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityCo
ntext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.538: INFO: Pod "webserver-deployment-84855cf797-6gnq9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6gnq9 webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-6gnq9 29362cd8-ad69-4d9e-8335-15ca35e6c324 35263 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.133"
    ],
    "mac": "6e:48:fd:d7:62:f8",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.133"
    ],
    "mac": "6e:48:fd:d7:62:f8",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385ecef 0xc00385ed00}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 51 57 53 54 52 102 48 45 97 56 52 102 45 52 56 101 54 45 97 48 101 49 45 57 100 48 99 48 102 49 98 100 55 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 
34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:34:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 
116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 52 46 49 51 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.133,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://27c6d14ece02fcb5a874715cb47772987317c7481934b18caa32b7c6babbba31,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
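The "is available" / "is not available" split in these messages tracks the pod's `Ready` condition: the Pending pods above carry `Ready,Status:False` (`ContainersNotReady`), while this Running pod carries `Ready,Status:True`. A minimal sketch of that check, using hypothetical local types rather than the real `k8s.io/api/core/v1` ones (the e2e framework's availability logic additionally honors the deployment's `minReadySeconds`):

```go
package main

import "fmt"

// podCondition mirrors the Type/Status pairs printed in the dumps above.
type podCondition struct {
	Type   string
	Status string
}

// isReady reports whether the Ready condition is True — the property
// separating the "available" pods from the "not available" ones here.
func isReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// Conditions as printed for a Pending webserver-deployment pod.
	pending := []podCondition{
		{"Initialized", "True"}, {"Ready", "False"},
		{"ContainersReady", "False"}, {"PodScheduled", "True"},
	}
	// Conditions as printed for the Running pod above.
	running := []podCondition{
		{"Initialized", "True"}, {"Ready", "True"},
		{"ContainersReady", "True"}, {"PodScheduled", "True"},
	}
	fmt.Println(isReady(pending), isReady(running)) // false true
}
```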
Mar 11 19:34:39.538: INFO: Pod "webserver-deployment-84855cf797-8pq99" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8pq99 webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-8pq99 e7a79cf3-1fbc-4ea0-a7b3-fa8018c1ddcc 35305 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.134"
    ],
    "mac": "12:a5:34:ed:4c:68",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.134"
    ],
    "mac": "12:a5:34:ed:4c:68",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385eeaf 0xc00385eec0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d39564f0-a84f-48e6-a0e1-9d0c0f1bd700\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {multus Update v1 2021-03-11 19:34:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}},}} {kubelet Update v1 2021-03-11 19:34:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:33 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.134,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://479856e3c2b59cb653fcbe12edda0a4e50179e437b67ca8ac4dbe57bb80a248f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.539: INFO: Pod "webserver-deployment-84855cf797-hmwn2" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hmwn2 webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-hmwn2 7d4c617a-5343-4ba1-858a-5179a000b1b5 35320 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.128"
    ],
    "mac": "42:7a:32:a1:23:65",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.128"
    ],
    "mac": "42:7a:32:a1:23:65",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f06f 0xc00385f080}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d39564f0-a84f-48e6-a0e1-9d0c0f1bd700\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {multus Update v1 2021-03-11 19:34:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}},}} {kubelet Update v1 2021-03-11 19:34:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.128\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.128,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6d038ad8ddf45a49d40f98fcd1537ddc68bf5d10118e9d3e3294a39be16489a0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.539: INFO: Pod "webserver-deployment-84855cf797-hpwws" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hpwws webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-hpwws ed65827a-8231-4dad-bc69-7a14aaa4b8e1 35257 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.125"
    ],
    "mac": "32:4c:bf:e3:73:d3",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.125"
    ],
    "mac": "32:4c:bf:e3:73:d3",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f22f 0xc00385f240}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d39564f0-a84f-48e6-a0e1-9d0c0f1bd700\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {multus Update v1 2021-03-11 19:34:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}},}} {kubelet Update v1 2021-03-11 19:34:29 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.125,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6538be88ef49c089dca6abc4c348472b6948d2013669701303099dc7465baf4f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.540: INFO: Pod "webserver-deployment-84855cf797-l6xzs" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-l6xzs webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-l6xzs 7ee10f01-fa00-4674-986f-70fe861be63f 35308 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.127"
    ],
    "mac": "0a:41:2f:b2:49:3e",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.127"
    ],
    "mac": "0a:41:2f:b2:49:3e",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f3ef 0xc00385f400}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d39564f0-a84f-48e6-a0e1-9d0c0f1bd700\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {multus Update v1 2021-03-11 19:34:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}},}} {kubelet Update v1 2021-03-11 19:34:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:
116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 51 46 49 50 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:33 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.127,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a2f64b2916371a2f286172e5934a49fc824a9805b32b0abc5e982f97d98d693c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
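The `FieldsV1{Raw:*[123 34 ...]}` sequences inside each managedFields entry above are Go's default `%v` rendering of a raw `[]byte` payload: every number is one ASCII byte of the underlying JSON tracked by server-side apply. A short helper (a sketch, not part of the e2e framework) recovers the readable form:

```python
def decode_fieldsv1(raw: str) -> str:
    """Decode a FieldsV1 Raw byte dump, given as space-separated
    decimal byte values (e.g. "123 34 102 58 ..."), back into the
    JSON text it encodes."""
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# The opening bytes of every kube-controller-manager entry above:
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
print(decode_fieldsv1(sample))  # -> {"f:metadata":{
```

Running it over a full `Raw` array yields the managed-fields document (`{"f:metadata":{"f:generateName":{},...}`) that records which manager owns which fields.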
Mar 11 19:34:39.540: INFO: Pod "webserver-deployment-84855cf797-r2wnr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-r2wnr webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-r2wnr cafe50f4-8de7-4c59-9479-914997c75f32 35244 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.131"
    ],
    "mac": "3e:34:4f:56:6f:14",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.131"
    ],
    "mac": "3e:34:4f:56:6f:14",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f5af 0xc00385f5c0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 51 57 53 54 52 102 48 45 97 56 52 102 45 52 56 101 54 45 97 48 101 49 45 57 100 48 99 48 102 49 98 100 55 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 
34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:34:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:28 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 
116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 52 46 49 51 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.131,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://75ccc54a5f755faaa3308aed6789d14b292235eab7bbc15ae7da6b3d4249ba65,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
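The pretty-printed `k8s.v1.cni.cncf.io/network-status` blocks in each pod's annotations are JSON lists written by Multus, one element per network attachment. Extracting the default interface and address from such an annotation can be sketched as follows (the annotation value is inlined here for illustration, copied from the r2wnr pod above):

```python
import json

# A network-status annotation value as Multus writes it: a JSON list
# with one entry per network attachment.
annotation = """[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": ["10.244.4.131"],
    "mac": "3e:34:4f:56:6f:14",
    "default": true,
    "dns": {}
}]"""

# Select the attachment marked as the pod's default network.
default_net = next(n for n in json.loads(annotation) if n.get("default"))
print(default_net["interface"], default_net["ips"][0])  # eth0 10.244.4.131
```

The `ips` entry here matches the `PodIP:10.244.4.131` reported in the pod's status, since the default attachment is the cluster network.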
Mar 11 19:34:39.540: INFO: Pod "webserver-deployment-84855cf797-rqmjm" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rqmjm webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-rqmjm 8645d182-50d0-47ba-b7ea-f451d6c6e99d 35289 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.132"
    ],
    "mac": "1a:76:ce:7e:3c:89",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.132"
    ],
    "mac": "1a:76:ce:7e:3c:89",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f78f 0xc00385f7a0}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 51 57 53 54 52 102 48 45 97 56 52 102 45 52 56 101 54 45 97 48 101 49 45 57 100 48 99 48 102 49 98 100 55 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 
34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:34:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 
116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 52 46 49 51 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.132,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6326f5c62c030c74af75ab293f0843a0aa37fd942c66b27fb8086d2f31b4055f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 11 19:34:39.541: INFO: Pod "webserver-deployment-84855cf797-sl7z8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-sl7z8 webserver-deployment-84855cf797- deployment-7408 /api/v1/namespaces/deployment-7408/pods/webserver-deployment-84855cf797-sl7z8 d6a203f5-45b3-4c8a-ba19-f55f91209f0c 35277 0 2021-03-11 19:34:23 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.126"
    ],
    "mac": "3e:b2:51:26:81:e7",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.126"
    ],
    "mac": "3e:b2:51:26:81:e7",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 d39564f0-a84f-48e6-a0e1-9d0c0f1bd700 0xc00385f96f 0xc00385f980}] []  [{kube-controller-manager Update v1 2021-03-11 19:34:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 51 57 53 54 52 102 48 45 97 56 52 102 45 52 56 101 54 45 97 48 101 49 45 57 100 48 99 48 102 49 98 100 55 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 
34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:34:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:34:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 
116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 51 46 49 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcgb2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcgb2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcgb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:34:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.126,StartTime:2021-03-11 19:34:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:34:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://72233a8a282522ff70eb8e40a8a877f69ecaccb7a1ec420d4d63aec5cec3af5c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:39.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7408" for this suite.

• [SLOW TEST:16.231 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":143,"skipped":2404,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
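Editor's note: the "proportional scaling" behavior verified by the test above can be sketched as follows. This is a simplified illustration, not the Deployment controller's actual Go implementation (which additionally honors maxSurge/maxUnavailable and ReplicaSet annotations): a replica delta is split across active ReplicaSets in proportion to their current sizes, with rounding leftovers handed to the largest ReplicaSets first.

```python
def scale_proportionally(rs_sizes, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes (hypothetical helper; a sketch of the idea only).

    rs_sizes: dict mapping ReplicaSet name -> current replica count.
    Returns a dict mapping ReplicaSet name -> new replica count.
    """
    current_total = sum(rs_sizes.values())
    if current_total == 0:
        raise ValueError("no replicas to scale proportionally")
    # Floor division keeps each ReplicaSet's share proportional.
    allocated = {name: size * new_total // current_total
                 for name, size in rs_sizes.items()}
    leftover = new_total - sum(allocated.values())
    # Hand out the rounding remainder one replica at a time, largest first.
    for name in sorted(rs_sizes, key=rs_sizes.get, reverse=True):
        if leftover == 0:
            break
        allocated[name] += 1
        leftover -= 1
    return allocated
```

For example, scaling two ReplicaSets of sizes 8 and 2 up to 15 total yields 12 and 3, preserving the 4:1 ratio.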
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:39.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4424
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 11 19:34:39.689: INFO: Waiting up to 5m0s for pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4" in namespace "downward-api-4424" to be "Succeeded or Failed"
Mar 11 19:34:39.691: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661042ms
Mar 11 19:34:41.695: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006612726s
Mar 11 19:34:43.698: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009649549s
Mar 11 19:34:45.703: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013865447s
Mar 11 19:34:47.706: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017243132s
Mar 11 19:34:49.710: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02083268s
Mar 11 19:34:51.714: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024861199s
Mar 11 19:34:53.718: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.029447607s
STEP: Saw pod success
Mar 11 19:34:53.718: INFO: Pod "downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4" satisfied condition "Succeeded or Failed"
Mar 11 19:34:53.721: INFO: Trying to get logs from node node2 pod downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4 container dapi-container: 
STEP: delete the pod
Mar 11 19:34:53.734: INFO: Waiting for pod downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4 to disappear
Mar 11 19:34:53.736: INFO: Pod downward-api-2c66ebce-4e2c-4d1f-92a0-fce1d70cf3a4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:53.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4424" for this suite.

• [SLOW TEST:14.190 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2421,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
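Editor's note: the Downward API test above creates a pod whose container sees its own resource limits and requests as environment variables. A minimal sketch of that wiring, built as a plain manifest dict (names, image, and resource values here are illustrative, not the e2e framework's), uses `valueFrom.resourceFieldRef`:

```python
def downward_api_pod(name="dapi-test-pod"):
    """Build a Pod manifest (as a dict) exposing the container's
    limits.cpu/memory and requests.cpu/memory as env vars via the
    downward API's resourceFieldRef. Illustrative sketch only."""
    def env(var, resource):
        # resourceFieldRef projects a container resource into an env var.
        return {"name": var,
                "valueFrom": {"resourceFieldRef": {
                    "containerName": "dapi-container",
                    "resource": resource}}}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "busybox",
                "command": ["sh", "-c", "env"],
                "resources": {
                    "requests": {"cpu": "250m", "memory": "32Mi"},
                    "limits": {"cpu": "500m", "memory": "64Mi"},
                },
                "env": [
                    env("CPU_LIMIT", "limits.cpu"),
                    env("MEMORY_LIMIT", "limits.memory"),
                    env("CPU_REQUEST", "requests.cpu"),
                    env("MEMORY_REQUEST", "requests.memory"),
                ],
            }],
        },
    }
```

The test then waits for the pod to reach "Succeeded" and checks the printed environment for the expected values, much as the log shows.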
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:53.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8670
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Mar 11 19:34:53.867: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix313257121/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:34:53.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8670" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":145,"skipped":2426,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
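Editor's note: the proxy test above starts `kubectl proxy --unix-socket=<path>` and then retrieves `/api/` through that socket. A stdlib-only sketch of the client side (the real framework does this in Go's net/http; the socket path and helper names here are assumptions) is:

```python
import socket

def api_request_bytes(path="/api/"):
    """Raw HTTP/1.1 request for probing the proxy over its unix socket."""
    return (f"GET {path} HTTP/1.1\r\n"
            "Host: localhost\r\n"
            "Connection: close\r\n\r\n").encode()

def query_proxy_socket(sock_path, path="/api/"):
    """Send the request over an AF_UNIX stream socket and return the raw
    HTTP response. Requires a running `kubectl proxy --unix-socket=...`."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(api_request_bytes(path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```

With the proxy running, `query_proxy_socket("/tmp/kubectl-proxy-unix313257121/test")` would return the API server's version list for `/api/`.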
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:34:53.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-800
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:34:54.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:34:56.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088094, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088094, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088094, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088094, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:34:59.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:35:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-800" for this suite.
STEP: Destroying namespace "webhook-800-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.457 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":146,"skipped":2434,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
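Editor's note: the four webhook scenarios above (timeout shorter than latency with failurePolicy Fail, timeout shorter with Ignore, timeout longer than latency, and the empty timeout that defaults to 10s in admissionregistration/v1) can be modeled with a small decision function. This is a simplified sketch of the API server's behavior being verified, not its implementation:

```python
def admission_outcome(webhook_latency_s, timeout_s, failure_policy="Fail"):
    """Model how the API server treats a slow admission webhook.

    timeout_s=None means the webhook's timeoutSeconds was left empty,
    which defaults to 10 seconds in admissionregistration/v1.
    """
    if timeout_s is None:
        timeout_s = 10
    if webhook_latency_s <= timeout_s:
        return "admitted"  # webhook answered within the deadline
    # Webhook call timed out: failurePolicy decides the request's fate.
    return "admitted" if failure_policy == "Ignore" else "rejected"
```

The assertions mirror the log's steps: a 1s timeout against a 5s-slow webhook fails the request under Fail, succeeds under Ignore, and a generous or defaulted timeout admits it.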
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:35:11.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-9570
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:35:11.553: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 11 19:35:11.561: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:11.561: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:11.561: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:11.563: INFO: Number of nodes with available pods: 0
Mar 11 19:35:11.563: INFO: Node node1 is running more than one daemon pod
Mar 11 19:35:12.569: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:12.569: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:12.569: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:12.572: INFO: Number of nodes with available pods: 0
Mar 11 19:35:12.572: INFO: Node node1 is running more than one daemon pod
Mar 11 19:35:13.568: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:13.568: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:13.568: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:13.571: INFO: Number of nodes with available pods: 0
Mar 11 19:35:13.571: INFO: Node node1 is running more than one daemon pod
Mar 11 19:35:14.568: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:14.568: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:14.568: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:14.570: INFO: Number of nodes with available pods: 0
Mar 11 19:35:14.571: INFO: Node node1 is running more than one daemon pod
Mar 11 19:35:15.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:15.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:15.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:15.572: INFO: Number of nodes with available pods: 1
Mar 11 19:35:15.573: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:16.569: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:16.569: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:16.569: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:16.571: INFO: Number of nodes with available pods: 1
Mar 11 19:35:16.571: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:17.569: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:17.569: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:17.569: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:17.572: INFO: Number of nodes with available pods: 2
Mar 11 19:35:17.572: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 11 19:35:17.597: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:17.597: INFO: Wrong image for pod: daemon-set-hvqgg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:17.601: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:17.601: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:17.601: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:18.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:18.605: INFO: Wrong image for pod: daemon-set-hvqgg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:18.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:18.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:18.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:19.607: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:19.607: INFO: Wrong image for pod: daemon-set-hvqgg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:19.607: INFO: Pod daemon-set-hvqgg is not available
Mar 11 19:35:19.611: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:19.611: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:19.611: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:20.607: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:20.607: INFO: Wrong image for pod: daemon-set-hvqgg. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:20.607: INFO: Pod daemon-set-hvqgg is not available
Mar 11 19:35:20.611: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:20.611: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:20.611: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:21.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:21.606: INFO: Pod daemon-set-qzsld is not available
Mar 11 19:35:21.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:21.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:21.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:22.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:22.605: INFO: Pod daemon-set-qzsld is not available
Mar 11 19:35:22.608: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:22.608: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:22.608: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:23.607: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:23.607: INFO: Pod daemon-set-qzsld is not available
Mar 11 19:35:23.612: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:23.612: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:23.612: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:24.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:24.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:24.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:24.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:25.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:25.606: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:25.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:25.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:25.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:26.607: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:26.607: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:26.611: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:26.611: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:26.611: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:27.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:27.605: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:27.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:27.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:27.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:28.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:28.606: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:28.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:28.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:28.611: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:29.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:29.606: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:29.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:29.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:29.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:30.608: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:30.608: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:30.612: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:30.612: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:30.612: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:31.608: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:31.608: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:31.613: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:31.613: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:31.613: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:32.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:32.605: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:32.609: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:32.609: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:32.609: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:33.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:33.605: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:33.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:33.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:33.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:34.606: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:34.606: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:34.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:34.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:34.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:35.605: INFO: Wrong image for pod: daemon-set-gzftr. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 11 19:35:35.605: INFO: Pod daemon-set-gzftr is not available
Mar 11 19:35:35.609: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:35.609: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:35.609: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.605: INFO: Pod daemon-set-kjsl5 is not available
Mar 11 19:35:36.610: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.610: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.610: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 11 19:35:36.614: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.615: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.615: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:36.617: INFO: Number of nodes with available pods: 1
Mar 11 19:35:36.617: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:37.623: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:37.623: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:37.623: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:37.626: INFO: Number of nodes with available pods: 1
Mar 11 19:35:37.626: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:38.624: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:38.624: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:38.624: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:38.627: INFO: Number of nodes with available pods: 1
Mar 11 19:35:38.627: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:39.622: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:39.623: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:39.623: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:39.625: INFO: Number of nodes with available pods: 1
Mar 11 19:35:39.625: INFO: Node node2 is running more than one daemon pod
Mar 11 19:35:40.622: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:40.622: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:40.622: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:35:40.625: INFO: Number of nodes with available pods: 2
Mar 11 19:35:40.625: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9570, will wait for the garbage collector to delete the pods
Mar 11 19:35:40.696: INFO: Deleting DaemonSet.extensions daemon-set took: 4.511789ms
Mar 11 19:35:41.297: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.195129ms
Mar 11 19:35:46.500: INFO: Number of nodes with available pods: 0
Mar 11 19:35:46.500: INFO: Number of running nodes: 0, number of available pods: 0
Mar 11 19:35:46.502: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9570/daemonsets","resourceVersion":"36119"},"items":null}

Mar 11 19:35:46.504: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9570/pods","resourceVersion":"36119"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:35:46.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9570" for this suite.

• [SLOW TEST:35.114 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":147,"skipped":2448,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:35:46.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-8970
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:35:46.646: INFO: Creating ReplicaSet my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489
Mar 11 19:35:46.652: INFO: Pod name my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489: Found 0 pods out of 1
Mar 11 19:35:51.655: INFO: Pod name my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489: Found 1 pods out of 1
Mar 11 19:35:51.655: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489" is running
Mar 11 19:35:51.657: INFO: Pod "my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489-cjsv8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:35:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:35:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:35:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-11 19:35:46 +0000 UTC Reason: Message:}])
Mar 11 19:35:51.657: INFO: Trying to dial the pod
Mar 11 19:35:56.670: INFO: Controller my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489: Got expected result from replica 1 [my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489-cjsv8]: "my-hostname-basic-197f1759-9439-4d43-b4dc-b927cbc20489-cjsv8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:35:56.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8970" for this suite.

• [SLOW TEST:10.157 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":148,"skipped":2451,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:35:56.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9181
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 11 19:35:56.817: INFO: Waiting up to 5m0s for pod "pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1" in namespace "emptydir-9181" to be "Succeeded or Failed"
Mar 11 19:35:56.819: INFO: Pod "pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539743ms
Mar 11 19:35:58.822: INFO: Pod "pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005906109s
Mar 11 19:36:00.830: INFO: Pod "pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013906324s
STEP: Saw pod success
Mar 11 19:36:00.830: INFO: Pod "pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1" satisfied condition "Succeeded or Failed"
Mar 11 19:36:00.833: INFO: Trying to get logs from node node1 pod pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1 container test-container: 
STEP: delete the pod
Mar 11 19:36:00.855: INFO: Waiting for pod pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1 to disappear
Mar 11 19:36:00.857: INFO: Pod pod-6ab44dc5-e1e0-4f53-a59c-f4b2c39b7ea1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:36:00.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9181" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2470,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:36:00.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-4386
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:36:00.992: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 11 19:36:00.998: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 11 19:36:06.002: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 11 19:36:06.002: INFO: Creating deployment "test-rolling-update-deployment"
Mar 11 19:36:06.005: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 11 19:36:06.009: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 11 19:36:08.018: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar 11 19:36:08.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088166, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088166, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088166, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088166, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:36:10.024: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 11 19:36:10.032: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4386 /apis/apps/v1/namespaces/deployment-4386/deployments/test-rolling-update-deployment 2785c2bb-1545-4b3a-b2d6-1e5fccc32433 36353 1 2021-03-11 19:36:06 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2021-03-11 19:36:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 
58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:36:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 
103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0013c6d18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-11 19:36:06 +0000 UTC,LastTransitionTime:2021-03-11 19:36:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2021-03-11 19:36:08 +0000 UTC,LastTransitionTime:2021-03-11 19:36:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar 11 19:36:10.035: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-4386 /apis/apps/v1/namespaces/deployment-4386/replicasets/test-rolling-update-deployment-59d5cb45c7 8ebfb893-8e5d-49ad-9579-ea9cb54da934 36342 1 2021-03-11 19:36:06 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2785c2bb-1545-4b3a-b2d6-1e5fccc32433 0xc0062f2247 0xc0062f2248}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:36:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 56 53 99 50 98 98 45 49 53 52 53 45 52 98 51 97 45 98 50 100 54 45 49 101 53 102 99 99 99 51 50 52 51 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0062f22d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:36:10.035: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Mar 11 19:36:10.035: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4386 /apis/apps/v1/namespaces/deployment-4386/replicasets/test-rolling-update-controller 3f877043-335d-4f8c-ad41-1330acd19fe6 36350 2 2021-03-11 19:36:00 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2785c2bb-1545-4b3a-b2d6-1e5fccc32433 0xc0062f2137 0xc0062f2138}] []  [{e2e.test Update apps/v1 2021-03-11 19:36:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 
117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:36:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 56 53 99 50 98 98 45 49 53 52 53 45 52 98 51 97 45 98 50 100 54 45 49 101 53 102 99 99 99 51 50 52 51 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 
114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0062f21d8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:36:10.038: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-k7b9p" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-k7b9p test-rolling-update-deployment-59d5cb45c7- deployment-4386 /api/v1/namespaces/deployment-4386/pods/test-rolling-update-deployment-59d5cb45c7-k7b9p 0cf9f61d-53ea-46f2-8159-c7af6f1a5e1a 36341 0 2021-03-11 19:36:06 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.139"
    ],
    "mac": "22:7e:87:1c:50:0b",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.139"
    ],
    "mac": "22:7e:87:1c:50:0b",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 8ebfb893-8e5d-49ad-9579-ea9cb54da934 0xc0062f27df 0xc0062f27f0}] []  [{kube-controller-manager Update v1 2021-03-11 19:36:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 101 98 102 98 56 57 51 45 56 101 53 100 45 52 57 97 100 45 57 53 55 57 45 101 97 57 99 98 53 52 100 97 57 51 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 
125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:36:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:36:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 51 46 49 51 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-54m4r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-54m4r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-54m4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},}
,LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:36:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-11 19:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:36:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.139,StartTime:2021-03-11 19:36:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:36:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://3ed18981010a74b14df72719ec82a1a593dbd57db0c5f6d3bd984e589bbba201,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:36:10.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4386" for this suite.

• [SLOW TEST:9.180 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":150,"skipped":2505,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
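The rolling update above replaces the old httpd ReplicaSet (scaled to 0) with a new agnhost one (scaled to 1). A minimal sketch of the Deployment manifest driving it, as a plain Python dict — names, labels, and image are taken from the log, but the helper itself is illustrative, not the e2e framework's code:

```python
# Illustrative sketch of the Deployment exercised by the test above.
# Names and image come from the log; this is not the e2e framework's own code.
def rolling_update_deployment(name, image, replicas=1):
    """Build a Deployment manifest dict using the RollingUpdate strategy."""
    labels = {"name": "sample-pod"}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "strategy": {"type": "RollingUpdate"},  # old pods deleted, new ones created
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "agnhost", "image": image}]},
            },
        },
    }

d = rolling_update_deployment(
    "test-rolling-update-deployment",
    "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
)
```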
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:36:10.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-5805
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-5805
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5805 to expose endpoints map[]
Mar 11 19:36:10.181: INFO: successfully validated that service multi-endpoint-test in namespace services-5805 exposes endpoints map[] (2.462893ms elapsed)
STEP: Creating pod pod1 in namespace services-5805
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5805 to expose endpoints map[pod1:[100]]
Mar 11 19:36:13.220: INFO: successfully validated that service multi-endpoint-test in namespace services-5805 exposes endpoints map[pod1:[100]] (3.028273129s elapsed)
STEP: Creating pod pod2 in namespace services-5805
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5805 to expose endpoints map[pod1:[100] pod2:[101]]
Mar 11 19:36:16.267: INFO: successfully validated that service multi-endpoint-test in namespace services-5805 exposes endpoints map[pod1:[100] pod2:[101]] (3.035983535s elapsed)
STEP: Deleting pod pod1 in namespace services-5805
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5805 to expose endpoints map[pod2:[101]]
Mar 11 19:36:17.281: INFO: successfully validated that service multi-endpoint-test in namespace services-5805 exposes endpoints map[pod2:[101]] (1.00984537s elapsed)
STEP: Deleting pod pod2 in namespace services-5805
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5805 to expose endpoints map[]
Mar 11 19:36:18.294: INFO: successfully validated that service multi-endpoint-test in namespace services-5805 exposes endpoints map[] (1.008228753s elapsed)
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:36:18.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5805" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:8.268 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":151,"skipped":2509,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
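The multiport Service test above tracks an expected endpoints map as pods come and go: empty, then `map[pod1:[100]]`, then `map[pod1:[100] pod2:[101]]`, and back down. A small sketch of that bookkeeping (illustrative helper, not the e2e framework's implementation):

```python
# Illustrative sketch of the endpoint map the multiport Service test validates.
# Mirrors the log's map[pod1:[100] pod2:[101]] progression; not framework code.
def expected_endpoints(pods):
    """pods: dict of pod name -> list of container ports backing the service."""
    return {name: sorted(ports) for name, ports in pods.items() if ports}

assert expected_endpoints({}) == {}  # before any backing pod exists
assert expected_endpoints({"pod1": [100]}) == {"pod1": [100]}
assert expected_endpoints({"pod1": [100], "pod2": [101]}) == {
    "pod1": [100],
    "pod2": [101],
}
assert expected_endpoints({"pod2": [101]}) == {"pod2": [101]}  # after pod1 deletion
```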
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:36:18.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-7490
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7490
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 11 19:36:18.448: INFO: Found 0 stateful pods, waiting for 3
Mar 11 19:36:28.452: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:36:28.452: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:36:28.452: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 11 19:36:38.452: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:36:38.452: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:36:38.452: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 11 19:36:38.476: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 11 19:36:48.507: INFO: Updating stateful set ss2
Mar 11 19:36:48.512: INFO: Waiting for Pod statefulset-7490/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 11 19:36:58.538: INFO: Found 1 stateful pods, waiting for 3
Mar 11 19:37:08.551: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:37:08.551: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:37:08.551: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Mar 11 19:37:18.543: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:37:18.543: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:37:18.543: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 11 19:37:18.564: INFO: Updating stateful set ss2
Mar 11 19:37:18.569: INFO: Waiting for Pod statefulset-7490/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:37:28.592: INFO: Updating stateful set ss2
Mar 11 19:37:28.596: INFO: Waiting for StatefulSet statefulset-7490/ss2 to complete update
Mar 11 19:37:28.596: INFO: Waiting for Pod statefulset-7490/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:37:38.607: INFO: Waiting for StatefulSet statefulset-7490/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 11 19:37:48.602: INFO: Deleting all statefulset in ns statefulset-7490
Mar 11 19:37:48.604: INFO: Scaling statefulset ss2 to 0
Mar 11 19:38:08.618: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:38:08.621: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:38:08.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7490" for this suite.

• [SLOW TEST:110.324 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":152,"skipped":2512,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
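The canary and phased rolling updates above rely on the StatefulSet `updateStrategy.rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new revision, so lowering the partition in steps phases the rollout. A sketch of that rule (illustrative logic, not controller code; revision names are placeholders, not the hashes from the log):

```python
# Sketch of StatefulSet partition semantics: pods with ordinal >= partition
# get the update revision, the rest keep the current one. Illustrative only.
def revision_for(ordinal, partition, current_rev, update_rev):
    return update_rev if ordinal >= partition else current_rev

cur, upd = "rev-old", "rev-new"  # placeholder revision names
# Canary with 3 replicas and partition=2: only the highest ordinal updates.
assert [revision_for(i, 2, cur, upd) for i in range(3)] == [cur, cur, upd]
# Phased roll: partition lowered to 0 updates the remaining pods.
assert [revision_for(i, 0, cur, upd) for i in range(3)] == [upd, upd, upd]
# Partition greater than replicas: no pod is updated (first STEP above).
assert [revision_for(i, 4, cur, upd) for i in range(3)] == [cur, cur, cur]
```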
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:38:08.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-9519
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:38:08.774: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b24be0ee-ec8b-4cb6-a55a-3159a5d6c863" in namespace "security-context-test-9519" to be "Succeeded or Failed"
Mar 11 19:38:08.776: INFO: Pod "busybox-user-65534-b24be0ee-ec8b-4cb6-a55a-3159a5d6c863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648441ms
Mar 11 19:38:10.780: INFO: Pod "busybox-user-65534-b24be0ee-ec8b-4cb6-a55a-3159a5d6c863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006302501s
Mar 11 19:38:12.783: INFO: Pod "busybox-user-65534-b24be0ee-ec8b-4cb6-a55a-3159a5d6c863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009502212s
Mar 11 19:38:12.783: INFO: Pod "busybox-user-65534-b24be0ee-ec8b-4cb6-a55a-3159a5d6c863" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:38:12.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9519" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2541,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
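The runAsUser test above pins the container's uid via the security context. A minimal sketch of such a pod manifest as a Python dict — the container name, image, and command here are illustrative assumptions, not taken from the test:

```python
# Sketch of a pod whose container runs as uid 65534 ("nobody").
# Image and command are illustrative, not the test's own values.
def pod_with_uid(uid):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"busybox-user-{uid}"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox",
                "image": "busybox",
                "command": ["sh", "-c", "id -u"],
                "securityContext": {"runAsUser": uid},  # container uid
            }],
        },
    }

pod = pod_with_uid(65534)
```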
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:38:12.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6994
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:38:12.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1" in namespace "downward-api-6994" to be "Succeeded or Failed"
Mar 11 19:38:12.926: INFO: Pod "downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.573102ms
Mar 11 19:38:14.933: INFO: Pod "downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009324423s
Mar 11 19:38:16.938: INFO: Pod "downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01423264s
STEP: Saw pod success
Mar 11 19:38:16.938: INFO: Pod "downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1" satisfied condition "Succeeded or Failed"
Mar 11 19:38:16.941: INFO: Trying to get logs from node node2 pod downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1 container client-container: 
STEP: delete the pod
Mar 11 19:38:16.960: INFO: Waiting for pod downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1 to disappear
Mar 11 19:38:16.962: INFO: Pod downwardapi-volume-98fb8ec8-b35a-4347-a1bf-b05f05de69c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:38:16.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6994" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2577,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
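The downward API test above checks the defaulting rule for `resourceFieldRef: limits.memory`: when the container sets no memory limit, the exposed value falls back to the node's allocatable memory. A sketch of that rule (illustrative logic, not kubelet code; the allocatable figure is an assumed example):

```python
# Sketch of the defaulting the downward API volume test verifies:
# with no container memory limit, limits.memory resolves to node allocatable.
# Illustrative logic, not kubelet code.
def effective_memory_limit(container_limit, node_allocatable):
    """Value exposed by a downward API volume for limits.memory, in bytes."""
    return container_limit if container_limit is not None else node_allocatable

node_alloc = 8 * 1024**3  # assumed example: 8 GiB allocatable on the node
assert effective_memory_limit(None, node_alloc) == node_alloc
assert effective_memory_limit(512 * 1024**2, node_alloc) == 512 * 1024**2
```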
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:38:16.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-6698
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:38:17.102: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/: 
anaconda/
audit/
boot.log
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7291
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 11 19:38:17.294: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7291 /api/v1/namespaces/watch-7291/configmaps/e2e-watch-test-watch-closed 0fd2416d-2f1f-4d2d-89f5-10c6b13440ac 37316 0 2021-03-11 19:38:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-11 19:38:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:38:17.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7291 /api/v1/namespaces/watch-7291/configmaps/e2e-watch-test-watch-closed 0fd2416d-2f1f-4d2d-89f5-10c6b13440ac 37317 0 2021-03-11 19:38:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-11 19:38:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 11 19:38:17.304: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7291 /api/v1/namespaces/watch-7291/configmaps/e2e-watch-test-watch-closed 0fd2416d-2f1f-4d2d-89f5-10c6b13440ac 37318 0 2021-03-11 19:38:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-11 19:38:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:38:17.304: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7291 /api/v1/namespaces/watch-7291/configmaps/e2e-watch-test-watch-closed 0fd2416d-2f1f-4d2d-89f5-10c6b13440ac 37319 0 2021-03-11 19:38:17 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-11 19:38:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:38:17.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7291" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":156,"skipped":2590,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
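The restart step above works by opening a second watch from the `resourceVersion` of the last event the first watch delivered (37317 in the MODIFIED event logged above), so the MODIFIED (mutation 2) and DELETED events that happened while the watch was closed are replayed. The equivalent raw API call would look roughly like this sketch (label selector and namespace taken from the log; exact URL encoding is illustrative):

```
GET /api/v1/namespaces/watch-7291/configmaps?watch=1&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted&resourceVersion=37317
```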
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:38:17.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7382
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Mar 11 19:38:17.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7382'
Mar 11 19:38:17.787: INFO: stderr: ""
Mar 11 19:38:17.787: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 19:38:17.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:17.953: INFO: stderr: ""
Mar 11 19:38:17.954: INFO: stdout: "update-demo-nautilus-95zlc update-demo-nautilus-zdhlh "
Mar 11 19:38:17.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95zlc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:18.117: INFO: stderr: ""
Mar 11 19:38:18.117: INFO: stdout: ""
Mar 11 19:38:18.117: INFO: update-demo-nautilus-95zlc is created but not running
Mar 11 19:38:23.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:23.267: INFO: stderr: ""
Mar 11 19:38:23.267: INFO: stdout: "update-demo-nautilus-95zlc update-demo-nautilus-zdhlh "
Mar 11 19:38:23.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95zlc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:23.430: INFO: stderr: ""
Mar 11 19:38:23.430: INFO: stdout: "true"
Mar 11 19:38:23.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-95zlc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:23.576: INFO: stderr: ""
Mar 11 19:38:23.576: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:38:23.576: INFO: validating pod update-demo-nautilus-95zlc
Mar 11 19:38:23.581: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:38:23.581: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:38:23.581: INFO: update-demo-nautilus-95zlc is verified up and running
Mar 11 19:38:23.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:23.738: INFO: stderr: ""
Mar 11 19:38:23.738: INFO: stdout: "true"
Mar 11 19:38:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:23.897: INFO: stderr: ""
Mar 11 19:38:23.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:38:23.897: INFO: validating pod update-demo-nautilus-zdhlh
Mar 11 19:38:23.901: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:38:23.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:38:23.901: INFO: update-demo-nautilus-zdhlh is verified up and running
STEP: scaling down the replication controller
Mar 11 19:38:23.910: INFO: scanned /root for discovery docs: 
Mar 11 19:38:23.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7382'
Mar 11 19:38:24.094: INFO: stderr: ""
Mar 11 19:38:24.094: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 19:38:24.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:24.244: INFO: stderr: ""
Mar 11 19:38:24.244: INFO: stdout: "update-demo-nautilus-95zlc update-demo-nautilus-zdhlh "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 11 19:38:29.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:29.422: INFO: stderr: ""
Mar 11 19:38:29.422: INFO: stdout: "update-demo-nautilus-95zlc update-demo-nautilus-zdhlh "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 11 19:38:34.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:34.587: INFO: stderr: ""
Mar 11 19:38:34.587: INFO: stdout: "update-demo-nautilus-95zlc update-demo-nautilus-zdhlh "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 11 19:38:39.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:39.749: INFO: stderr: ""
Mar 11 19:38:39.749: INFO: stdout: "update-demo-nautilus-zdhlh "
Mar 11 19:38:39.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:39.887: INFO: stderr: ""
Mar 11 19:38:39.887: INFO: stdout: "true"
Mar 11 19:38:39.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:40.036: INFO: stderr: ""
Mar 11 19:38:40.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:38:40.036: INFO: validating pod update-demo-nautilus-zdhlh
Mar 11 19:38:40.039: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:38:40.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:38:40.039: INFO: update-demo-nautilus-zdhlh is verified up and running
STEP: scaling up the replication controller
Mar 11 19:38:40.047: INFO: scanned /root for discovery docs: 
Mar 11 19:38:40.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7382'
Mar 11 19:38:40.247: INFO: stderr: ""
Mar 11 19:38:40.247: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 19:38:40.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:40.412: INFO: stderr: ""
Mar 11 19:38:40.412: INFO: stdout: "update-demo-nautilus-l7rkd update-demo-nautilus-zdhlh "
Mar 11 19:38:40.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:40.556: INFO: stderr: ""
Mar 11 19:38:40.556: INFO: stdout: ""
Mar 11 19:38:40.556: INFO: update-demo-nautilus-l7rkd is created but not running
Mar 11 19:38:45.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7382'
Mar 11 19:38:45.710: INFO: stderr: ""
Mar 11 19:38:45.710: INFO: stdout: "update-demo-nautilus-l7rkd update-demo-nautilus-zdhlh "
Mar 11 19:38:45.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:45.859: INFO: stderr: ""
Mar 11 19:38:45.859: INFO: stdout: "true"
Mar 11 19:38:45.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l7rkd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:46.007: INFO: stderr: ""
Mar 11 19:38:46.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:38:46.008: INFO: validating pod update-demo-nautilus-l7rkd
Mar 11 19:38:46.012: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:38:46.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:38:46.012: INFO: update-demo-nautilus-l7rkd is verified up and running
Mar 11 19:38:46.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:46.172: INFO: stderr: ""
Mar 11 19:38:46.172: INFO: stdout: "true"
Mar 11 19:38:46.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zdhlh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7382'
Mar 11 19:38:46.323: INFO: stderr: ""
Mar 11 19:38:46.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 19:38:46.323: INFO: validating pod update-demo-nautilus-zdhlh
Mar 11 19:38:46.325: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar 11 19:38:46.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 19:38:46.326: INFO: update-demo-nautilus-zdhlh is verified up and running
STEP: using delete to clean up resources
Mar 11 19:38:46.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7382'
Mar 11 19:38:46.448: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:38:46.448: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 11 19:38:46.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7382'
Mar 11 19:38:46.636: INFO: stderr: "No resources found in kubectl-7382 namespace.\n"
Mar 11 19:38:46.636: INFO: stdout: ""
Mar 11 19:38:46.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7382 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 11 19:38:46.803: INFO: stderr: ""
Mar 11 19:38:46.804: INFO: stdout: "update-demo-nautilus-l7rkd\nupdate-demo-nautilus-zdhlh\n"
Mar 11 19:38:47.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7382'
Mar 11 19:38:47.497: INFO: stderr: "No resources found in kubectl-7382 namespace.\n"
Mar 11 19:38:47.497: INFO: stdout: ""
Mar 11 19:38:47.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7382 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 11 19:38:47.667: INFO: stderr: ""
Mar 11 19:38:47.667: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:38:47.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7382" for this suite.

• [SLOW TEST:30.363 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":157,"skipped":2614,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
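The manifest piped to `kubectl create -f -` at the start of this test can be reconstructed from the log output (controller name, `name=update-demo` label, container name, and nautilus image all appear above); the container port is an assumption:

```yaml
# Hedged reconstruction of the Update Demo replication controller.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80            # assumption; not visible in the log
```

The scale-down and scale-up phases then run `kubectl scale rc update-demo-nautilus --replicas=N --timeout=5m`, polling the pod list until the observed replica count matches, exactly as the repeated `get pods -o template` invocations above show.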
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:38:47.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2246
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0311 19:39:27.829852      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:39:27.829: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:39:27.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2246" for this suite.

• [SLOW TEST:40.163 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":158,"skipped":2651,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
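The "delete options say so" in the test name refers to `DeleteOptions.propagationPolicy`: deleting the RC with the `Orphan` policy tells the garbage collector to remove the owner references but leave the dependent pods alive, which is why the test then waits 30 seconds to confirm nothing else is deleted. The request body sent with the DELETE call is, in outline:

```yaml
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl of this era, `kubectl delete rc <name> --cascade=false` requested the same orphaning behavior.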
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:39:27.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-7252
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:39:27.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:39:35.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7252" for this suite.

• [SLOW TEST:7.708 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2674,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
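Remote command execution here goes through the pod's `exec` subresource upgraded to a WebSocket rather than SPDY. A rough sketch of the request shape (pod name elided; query encoding illustrative):

```
GET /api/v1/namespaces/pods-7252/pods/<pod-name>/exec?command=echo&command=remote%20execution&stdout=1&stderr=1
Connection: Upgrade
Upgrade: websocket
```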
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:39:35.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-8837
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:39:35.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:39:37.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088375, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088375, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088375, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088375, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:39:40.971: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:39:40.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9058-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:39:47.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8837" for this suite.
STEP: Destroying namespace "webhook-8837-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.502 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":160,"skipped":2705,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
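The registration step above amounts to creating a MutatingWebhookConfiguration that routes CREATE requests for the test CRD to the freshly deployed webhook service. The sketch below is hedged: the webhook name and service namespace appear in the log, but the configuration name, service path, API group, resource plural, and CA bundle are placeholders.

```yaml
# Hedged sketch of the mutating webhook registration; most fields are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook                          # placeholder
webhooks:
- name: e2e-test-webhook-9058-crds.webhook.example.com     # from the log above
  clientConfig:
    service:
      name: e2e-test-webhook                               # service name from the log
      namespace: webhook-8837
      path: /mutating-custom-resource                      # assumed path
    caBundle: <base64-encoded CA>                          # placeholder
  rules:
  - apiGroups: ["webhook.example.com"]                     # assumed group
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-9058-crds"]              # assumed plural
  admissionReviewVersions: ["v1"]
  sideEffects: None
```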
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:39:47.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7893
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 11 19:39:51.728: INFO: Successfully updated pod "annotationupdatee53ebb95-cfdb-498c-9ae9-dcd814f57146"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:39:55.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7893" for this suite.

• [SLOW TEST:8.708 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2715,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
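The annotation-update test above relies on the kubelet refreshing a downward-API-projected file when pod annotations change. A minimal sketch of such a pod (the pod name, image, and loop command are illustrative assumptions; the e2e test uses its own helper image):

```shell
# Pod that projects its own metadata.annotations into a file and prints it
# in a loop, so annotation updates become visible in the container.
cat > annotation-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo   # illustrative name
  annotations:
    build: "one"
spec:
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# kubectl apply -f annotation-pod.yaml                                # needs a cluster
# kubectl annotate pod annotationupdate-demo build=two --overwrite    # kubelet rewrites the file
```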
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:39:55.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8922
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 11 19:39:55.883: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:40:07.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8922" for this suite.

• [SLOW TEST:11.713 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":162,"skipped":2726,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
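The init-container test above checks that `spec.initContainers` run to completion, in order, before the main container starts on a `restartPolicy: Always` pod. A minimal sketch (pod and container names are illustrative):

```shell
# Two init containers that must both exit 0 before the main container runs.
cat > init-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo   # illustrative name
spec:
  restartPolicy: Always
  initContainers:
    - name: init1
      image: busybox
      command: ["sh", "-c", "exit 0"]
    - name: init2
      image: busybox
      command: ["sh", "-c", "exit 0"]
  containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
EOF
# kubectl apply -f init-demo.yaml   # needs a cluster
# kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
```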
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:40:07.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3510
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:40:15.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3510" for this suite.

• [SLOW TEST:8.158 seconds]
[k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2726,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
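The kubelet test above schedules a busybox command that always fails and asserts the container status carries a terminated reason. A minimal sketch, assuming `restartPolicy: Never` (the e2e test's exact pod spec may differ):

```shell
# A container that always exits non-zero; the kubelet should record a
# terminated state with a reason (typically "Error") in the pod status.
cat > fail-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: bin-false
      image: busybox
      command: ["/bin/false"]
EOF
# kubectl apply -f fail-pod.yaml   # needs a cluster
# kubectl get pod always-fails \
#   -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
```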
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:40:15.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-2968
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:40:15.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Mar 11 19:40:21.301: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:21Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:21Z]] name:name1 resourceVersion:38372 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eb4ff08d-0710-4670-a6b9-e925f3ecba89] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Mar 11 19:40:31.310: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:31Z]] name:name2 resourceVersion:38418 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4dbf7460-4e0c-4320-9bf6-1913abc43b40] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Mar 11 19:40:41.315: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:41Z]] name:name1 resourceVersion:38460 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eb4ff08d-0710-4670-a6b9-e925f3ecba89] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Mar 11 19:40:51.322: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:51Z]] name:name2 resourceVersion:38494 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4dbf7460-4e0c-4320-9bf6-1913abc43b40] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Mar 11 19:41:01.330: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:41Z]] name:name1 resourceVersion:38528 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:eb4ff08d-0710-4670-a6b9-e925f3ecba89] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Mar 11 19:41:11.343: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-11T19:40:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-11T19:40:51Z]] name:name2 resourceVersion:38562 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4dbf7460-4e0c-4320-9bf6-1913abc43b40] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:41:21.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2968" for this suite.

• [SLOW TEST:66.229 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":164,"skipped":2732,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
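The watch test above streams ADDED/MODIFIED/DELETED events for custom resources. The group, version, plural, and kind below are taken from the log itself (`mygroup.example.com/v1beta1`, `noxus`, `WishIHadChosenNoxu`; the cluster-scoped selfLink implies `scope: Cluster`); the rest is a sketch against the `apiextensions.k8s.io/v1beta1` API matching this v1.18 cluster.

```shell
# CRD matching the objects seen in the watch events above.
cat > crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: WishIHadChosenNoxu
EOF
# kubectl apply -f crd.yaml       # needs a cluster
# kubectl get noxus --watch       # streams the same ADDED/MODIFIED/DELETED events
```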
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:41:21.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6682
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:41:21.996: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586" in namespace "downward-api-6682" to be "Succeeded or Failed"
Mar 11 19:41:21.998: INFO: Pod "downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586": Phase="Pending", Reason="", readiness=false. Elapsed: 1.912312ms
Mar 11 19:41:24.000: INFO: Pod "downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004648364s
Mar 11 19:41:26.005: INFO: Pod "downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009251438s
STEP: Saw pod success
Mar 11 19:41:26.005: INFO: Pod "downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586" satisfied condition "Succeeded or Failed"
Mar 11 19:41:26.008: INFO: Trying to get logs from node node1 pod downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586 container client-container: 
STEP: delete the pod
Mar 11 19:41:26.032: INFO: Waiting for pod downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586 to disappear
Mar 11 19:41:26.034: INFO: Pod downwardapi-volume-9954b3b0-1618-455f-94f2-3ade87b90586 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:41:26.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6682" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2806,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
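The test above projects `limits.cpu` through a downward API volume on a container that sets no CPU limit, in which case the projected value falls back to the node's allocatable CPU. A minimal sketch (pod name, image, and command are illustrative):

```shell
# Container with no resources.limits.cpu; the projected limits.cpu file
# should therefore reflect node allocatable CPU rather than a pod limit.
cat > cpu-limit-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
# kubectl apply -f cpu-limit-pod.yaml   # needs a cluster
# kubectl logs downwardapi-cpu-demo     # prints the defaulted CPU limit
```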
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:41:26.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7222
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 11 19:41:26.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7222'
Mar 11 19:41:26.303: INFO: stderr: ""
Mar 11 19:41:26.303: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Mar 11 19:41:26.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7222'
Mar 11 19:41:31.211: INFO: stderr: ""
Mar 11 19:41:31.211: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:41:31.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7222" for this suite.

• [SLOW TEST:5.176 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":166,"skipped":2824,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:41:31.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7497
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Mar 11 19:41:37.858: INFO: Successfully updated pod "adopt-release-sbt5p"
STEP: Checking that the Job readopts the Pod
Mar 11 19:41:37.858: INFO: Waiting up to 15m0s for pod "adopt-release-sbt5p" in namespace "job-7497" to be "adopted"
Mar 11 19:41:37.860: INFO: Pod "adopt-release-sbt5p": Phase="Running", Reason="", readiness=true. Elapsed: 1.796993ms
Mar 11 19:41:39.863: INFO: Pod "adopt-release-sbt5p": Phase="Running", Reason="", readiness=true. Elapsed: 2.005226706s
Mar 11 19:41:39.863: INFO: Pod "adopt-release-sbt5p" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Mar 11 19:41:40.374: INFO: Successfully updated pod "adopt-release-sbt5p"
STEP: Checking that the Job releases the Pod
Mar 11 19:41:40.375: INFO: Waiting up to 15m0s for pod "adopt-release-sbt5p" in namespace "job-7497" to be "released"
Mar 11 19:41:40.376: INFO: Pod "adopt-release-sbt5p": Phase="Running", Reason="", readiness=true. Elapsed: 1.951434ms
Mar 11 19:41:42.381: INFO: Pod "adopt-release-sbt5p": Phase="Running", Reason="", readiness=true. Elapsed: 2.00630358s
Mar 11 19:41:42.381: INFO: Pod "adopt-release-sbt5p" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:41:42.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7497" for this suite.

• [SLOW TEST:11.170 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":167,"skipped":2824,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
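The adopt/release mechanics above can be reproduced by hand: dropping a pod's `ownerReferences` orphans it (the Job controller re-adopts it because its labels still match), and removing those labels makes the Job release it. The Job name matches the pod prefix in the log; the pod template and the exact label keys (`controller-uid`, `job-name`) are assumptions about how Job pods were labelled in this release.

```shell
cat > adopt-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release
spec:
  parallelism: 2
  completions: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: c
          image: busybox
          command: ["sh", "-c", "sleep 3600"]
EOF
# kubectl apply -f adopt-job.yaml   # needs a cluster
# Orphan one pod; the Job controller should re-adopt it:
# kubectl patch pod <pod-name> --type=json \
#   -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
# Strip the matching labels; the Job should release the pod:
# kubectl label pod <pod-name> controller-uid- job-name-
```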
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:41:42.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9647
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:41:42.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9647" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":168,"skipped":2840,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
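The secret-patch steps above can be sketched with `kubectl`. The secret name, label key, and patch body are illustrative assumptions, not the test's own values; `dmFsdWU=` is base64 for `value`.

```shell
# Patch body: add a label (used later for list/delete by selector) and
# overwrite the data. All names here are illustrative.
cat > secret-patch.json <<'EOF'
{"metadata":{"labels":{"testsecret":"true"}},"data":{"key":"dmFsdWU="}}
EOF
# kubectl create secret generic demo-secret --from-literal=key=value   # needs a cluster
# kubectl patch secret demo-secret -p "$(cat secret-patch.json)"
# kubectl get secrets --all-namespaces -l testsecret=true              # find it via the patched label
# kubectl delete secret -l testsecret=true                             # delete via LabelSelector
```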
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:41:42.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-2314
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2314
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 11 19:41:42.676: INFO: Found 0 stateful pods, waiting for 3
Mar 11 19:41:52.688: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:41:52.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:41:52.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 11 19:42:02.680: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:42:02.680: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:42:02.680: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:42:02.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2314 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:42:02.948: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:42:02.948: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:42:02.948: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 11 19:42:12.981: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 11 19:42:22.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2314 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 19:42:23.256: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 19:42:23.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 19:42:23.256: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 19:42:33.276: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:42:33.276: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:42:33.276: INFO: Waiting for Pod statefulset-2314/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:42:33.276: INFO: Waiting for Pod statefulset-2314/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:42:43.288: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:42:43.288: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:42:43.288: INFO: Waiting for Pod statefulset-2314/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 11 19:42:53.281: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:42:53.281: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Mar 11 19:43:03.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2314 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:43:03.546: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:43:03.546: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:43:03.546: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 19:43:13.577: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 11 19:43:23.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2314 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 19:43:23.843: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 19:43:23.843: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 19:43:23.843: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 19:43:33.860: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:43:33.860: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 11 19:43:33.860: INFO: Waiting for Pod statefulset-2314/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 11 19:43:33.860: INFO: Waiting for Pod statefulset-2314/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 11 19:43:43.867: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:43:43.867: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 11 19:43:53.866: INFO: Waiting for StatefulSet statefulset-2314/ss2 to complete update
Mar 11 19:43:53.866: INFO: Waiting for Pod statefulset-2314/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 11 19:44:03.868: INFO: Deleting all statefulset in ns statefulset-2314
Mar 11 19:44:03.870: INFO: Scaling statefulset ss2 to 0
Mar 11 19:44:13.883: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:44:13.885: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:44:13.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2314" for this suite.

• [SLOW TEST:151.356 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":169,"skipped":2858,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
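The rolling update traced above (controller revision ss2-65c7964b94 to ss2-84f9d6bf57) is driven by patching the StatefulSet's pod template. A minimal sketch of the kind of object involved follows; the name `ss2` appears in the log, but the labels, service name, and image here are illustrative, not the test's actual spec:

```yaml
# Illustrative sketch: a StatefulSet whose template changes roll out
# pod-by-pod under the RollingUpdate strategy, producing a new
# controller revision each time the template is modified.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # hypothetical headless service name
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # illustrative label
  updateStrategy:
    type: RollingUpdate      # pods are replaced in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: httpd:2.4     # changing this image triggers a new revision
```

A rollback is simply another template change back to the previous spec, which the controller rolls out the same way.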
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:44:13.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2732
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-01e80598-53f9-4b6a-8fd2-133a46ea42e1
STEP: Creating a pod to test consume configMaps
Mar 11 19:44:14.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0" in namespace "configmap-2732" to be "Succeeded or Failed"
Mar 11 19:44:14.044: INFO: Pod "pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329116ms
Mar 11 19:44:16.048: INFO: Pod "pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006028705s
Mar 11 19:44:18.054: INFO: Pod "pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0126653s
STEP: Saw pod success
Mar 11 19:44:18.054: INFO: Pod "pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0" satisfied condition "Succeeded or Failed"
Mar 11 19:44:18.057: INFO: Trying to get logs from node node2 pod pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0 container configmap-volume-test: 
STEP: delete the pod
Mar 11 19:44:18.081: INFO: Waiting for pod pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0 to disappear
Mar 11 19:44:18.083: INFO: Pod pod-configmaps-6ebf3452-c732-4a7c-bc14-a642b0cab4e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:44:18.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2732" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2910,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
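The ConfigMap test above mounts one ConfigMap into two volumes of the same pod and expects the pod to run to completion. A hedged sketch of such a pod (names, paths, and image are illustrative):

```yaml
# Illustrative sketch: one ConfigMap consumed via two volumes in one pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume-1/data-1", "/etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume  # hypothetical ConfigMap name
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume  # same ConfigMap, second volume
```

The pod phase reaching "Succeeded" (as in the log) means both mounts resolved and the command exited 0.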
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:44:18.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1304
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:44:18.628: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:44:20.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088658, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088658, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088658, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751088658, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:44:23.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:44:23.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1304" for this suite.
STEP: Destroying namespace "webhook-1304-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.641 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":171,"skipped":2911,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
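The webhook test above relies on the API server intentionally exempting webhook configuration objects from admission webhooks, so that a misbehaving webhook cannot lock itself in place. A sketch of the kind of configuration registered (service name, path, and webhook name are illustrative):

```yaml
# Illustrative sketch: a deny-on-DELETE webhook aimed at webhook
# configuration objects. The API server skips admission webhooks for
# these resources, so deletion still succeeds, as the test verifies.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions   # hypothetical name
webhooks:
- name: deny.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["*"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service:
      namespace: webhook-1304    # namespace from the log; path is assumed
      name: e2e-test-webhook
      path: /always-deny
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```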
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:44:23.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-9877
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:44:23.877: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 11 19:44:23.882: INFO: Number of nodes with available pods: 0
Mar 11 19:44:23.882: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 11 19:44:23.896: INFO: Number of nodes with available pods: 0
Mar 11 19:44:23.896: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:24.900: INFO: Number of nodes with available pods: 0
Mar 11 19:44:24.900: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:25.900: INFO: Number of nodes with available pods: 0
Mar 11 19:44:25.900: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:26.899: INFO: Number of nodes with available pods: 0
Mar 11 19:44:26.899: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:27.900: INFO: Number of nodes with available pods: 1
Mar 11 19:44:27.900: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 11 19:44:27.914: INFO: Number of nodes with available pods: 1
Mar 11 19:44:27.914: INFO: Number of running nodes: 0, number of available pods: 1
Mar 11 19:44:28.919: INFO: Number of nodes with available pods: 0
Mar 11 19:44:28.920: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 11 19:44:28.927: INFO: Number of nodes with available pods: 0
Mar 11 19:44:28.927: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:29.931: INFO: Number of nodes with available pods: 0
Mar 11 19:44:29.931: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:30.933: INFO: Number of nodes with available pods: 0
Mar 11 19:44:30.933: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:31.931: INFO: Number of nodes with available pods: 0
Mar 11 19:44:31.931: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:32.932: INFO: Number of nodes with available pods: 0
Mar 11 19:44:32.932: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:33.931: INFO: Number of nodes with available pods: 0
Mar 11 19:44:33.931: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:34.932: INFO: Number of nodes with available pods: 0
Mar 11 19:44:34.932: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:35.932: INFO: Number of nodes with available pods: 0
Mar 11 19:44:35.932: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:36.933: INFO: Number of nodes with available pods: 0
Mar 11 19:44:36.933: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:37.930: INFO: Number of nodes with available pods: 0
Mar 11 19:44:37.930: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:38.929: INFO: Number of nodes with available pods: 0
Mar 11 19:44:38.929: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:39.931: INFO: Number of nodes with available pods: 0
Mar 11 19:44:39.931: INFO: Node node1 is running more than one daemon pod
Mar 11 19:44:40.930: INFO: Number of nodes with available pods: 1
Mar 11 19:44:40.930: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9877, will wait for the garbage collector to delete the pods
Mar 11 19:44:40.995: INFO: Deleting DaemonSet.extensions daemon-set took: 5.89961ms
Mar 11 19:44:41.095: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.463995ms
Mar 11 19:44:46.498: INFO: Number of nodes with available pods: 0
Mar 11 19:44:46.498: INFO: Number of running nodes: 0, number of available pods: 0
Mar 11 19:44:46.502: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9877/daemonsets","resourceVersion":"40127"},"items":null}

Mar 11 19:44:46.505: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9877/pods","resourceVersion":"40127"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:44:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9877" for this suite.

• [SLOW TEST:22.798 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":172,"skipped":2911,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
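The "complex daemon" test above schedules pods only on nodes carrying a matching label, then relabels nodes and updates the selector. A minimal sketch of such a DaemonSet (label key, labels, and image are illustrative):

```yaml
# Illustrative sketch: a DaemonSet restricted by nodeSelector. Relabeling
# a node in or out of the selector launches or terminates its daemon pod,
# which is the behavior the log's step messages track.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set          # illustrative label
  updateStrategy:
    type: RollingUpdate        # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green           # hypothetical key; the test flips blue -> green
      containers:
      - name: app
        image: httpd:2.4
```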
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:44:46.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7284
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:44:46.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 11 19:44:54.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7284 create -f -'
Mar 11 19:44:54.946: INFO: stderr: ""
Mar 11 19:44:54.946: INFO: stdout: "e2e-test-crd-publish-openapi-4946-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 11 19:44:54.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7284 delete e2e-test-crd-publish-openapi-4946-crds test-cr'
Mar 11 19:44:55.103: INFO: stderr: ""
Mar 11 19:44:55.103: INFO: stdout: "e2e-test-crd-publish-openapi-4946-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 11 19:44:55.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7284 apply -f -'
Mar 11 19:44:55.335: INFO: stderr: ""
Mar 11 19:44:55.336: INFO: stdout: "e2e-test-crd-publish-openapi-4946-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 11 19:44:55.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7284 delete e2e-test-crd-publish-openapi-4946-crds test-cr'
Mar 11 19:44:55.498: INFO: stderr: ""
Mar 11 19:44:55.498: INFO: stdout: "e2e-test-crd-publish-openapi-4946-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 11 19:44:55.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4946-crds'
Mar 11 19:44:55.725: INFO: stderr: ""
Mar 11 19:44:55.725: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4946-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:44:58.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7284" for this suite.

• [SLOW TEST:12.124 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":173,"skipped":2959,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
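The "CRD without validation schema" case lets `kubectl create`/`apply` accept arbitrary fields on the custom resource, and `kubectl explain` prints an empty description, as shown in the stdout above. In the `apiextensions.k8s.io/v1` API a schema stanza is mandatory, so "no validation" is expressed with `x-kubernetes-preserve-unknown-fields`; a hedged sketch (group and names are illustrative):

```yaml
# Illustrative sketch: a CRD that accepts any unknown properties,
# approximating the test's schema-less CRD in the v1 API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com       # hypothetical name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true  # accept arbitrary fields
```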
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:44:58.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-1997
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 11 19:44:58.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 11 19:45:15.361: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:45:22.290: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:45:38.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1997" for this suite.

• [SLOW TEST:40.281 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":174,"skipped":2964,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
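The multi-version case above checks that every served version of a group shows up in the published OpenAPI document. A single CRD can serve several versions at once, with exactly one marked as the storage version; a hedged sketch (group and names are illustrative):

```yaml
# Illustrative sketch: one CRD serving two versions of the same group.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multitestcrds.example.com  # hypothetical name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: multitestcrds
    singular: multitestcrd
    kind: MultiTestCrd
  versions:
  - name: v1
    served: true
    storage: true                  # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```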
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:45:38.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-50
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 11 19:45:44.096: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:45:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-50" for this suite.

• [SLOW TEST:6.180 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":175,"skipped":2969,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
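Adoption and release work through label selectors and owner references: a bare pod whose labels match a ReplicaSet's selector gets an `ownerReference` set (adopted), and changing the pod's label detaches it again (released). A sketch of the matching ReplicaSet (the `name=pod-adoption-release` label comes from the log; the image is illustrative):

```yaml
# Illustrative sketch: a ReplicaSet whose selector matches a pre-existing
# orphan pod. The controller adopts that pod instead of creating a new one;
# relabeling the pod releases it and triggers a replacement.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: httpd:2.4
```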
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:45:45.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6678
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6db336c5-e7b2-4679-b2bc-489478a6d2d8
STEP: Creating a pod to test consume configMaps
Mar 11 19:45:45.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d" in namespace "projected-6678" to be "Succeeded or Failed"
Mar 11 19:45:45.257: INFO: Pod "pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53595ms
Mar 11 19:45:47.261: INFO: Pod "pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006504636s
Mar 11 19:45:49.264: INFO: Pod "pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01000719s
STEP: Saw pod success
Mar 11 19:45:49.264: INFO: Pod "pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d" satisfied condition "Succeeded or Failed"
Mar 11 19:45:49.267: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:45:49.290: INFO: Waiting for pod pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d to disappear
Mar 11 19:45:49.292: INFO: Pod pod-projected-configmaps-88be5278-7467-43a6-9b83-d7f9baa2515d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:45:49.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6678" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2989,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
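The projected variant differs from the plain ConfigMap test in the volume type: a `projected` volume aggregates one or more sources (ConfigMaps, Secrets, downward API) into a single mount. A hedged sketch showing one such volume; the test mounts the same ConfigMap through two of them (names and paths are illustrative):

```yaml
# Illustrative sketch: a ConfigMap consumed through a projected volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume-1/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
      readOnly: true
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume  # hypothetical ConfigMap name
```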
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:45:49.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-7848
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:45:49.433: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb" in namespace "security-context-test-7848" to be "Succeeded or Failed"
Mar 11 19:45:49.435: INFO: Pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156301ms
Mar 11 19:45:51.439: INFO: Pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005946907s
Mar 11 19:45:53.443: INFO: Pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009930349s
Mar 11 19:45:55.447: INFO: Pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014657369s
Mar 11 19:45:55.447: INFO: Pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb" satisfied condition "Succeeded or Failed"
Mar 11 19:45:55.458: INFO: Got logs for pod "busybox-privileged-false-42a62d4c-eca3-496a-a35a-639b4a35ccbb": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:45:55.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7848" for this suite.

• [SLOW TEST:6.164 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3034,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
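The "RTNETLINK answers: Operation not permitted" line in the pod logs above is the expected outcome: with `privileged: false` the container lacks the capabilities needed to manipulate network interfaces. A sketch of such a pod (name, image, and command are illustrative; the test's exact command is not shown in the log):

```yaml
# Illustrative sketch: an unprivileged container attempting a network
# operation that requires privilege, which fails as the test expects.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox:1.29
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false            # the property under test
```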
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:45:55.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8432
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:46:01.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8432" for this suite.

• [SLOW TEST:6.156 seconds]
[k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3059,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:46:01.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3190
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Mar 11 19:46:01.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-3190 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 11 19:46:01.899: INFO: stderr: ""
Mar 11 19:46:01.899: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Mar 11 19:46:01.899: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 11 19:46:01.899: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3190" to be "running and ready, or succeeded"
Mar 11 19:46:01.901: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371742ms
Mar 11 19:46:03.904: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005579856s
Mar 11 19:46:05.908: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.009083385s
Mar 11 19:46:05.908: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 11 19:46:05.908: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar 11 19:46:05.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190'
Mar 11 19:46:06.079: INFO: stderr: ""
Mar 11 19:46:06.079: INFO: stdout: "I0311 19:46:04.145736       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/2j2 311\nI0311 19:46:04.345901       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/l6n 250\nI0311 19:46:04.545906       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/chw 568\nI0311 19:46:04.745877       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/lsq 538\nI0311 19:46:04.945958       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/fx6 334\nI0311 19:46:05.145917       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/xmt 437\nI0311 19:46:05.345912       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/t5v 388\nI0311 19:46:05.545950       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/68cx 451\nI0311 19:46:05.745924       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/2s5x 220\nI0311 19:46:05.945921       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/9bg 294\n"
STEP: limiting log lines
Mar 11 19:46:06.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190 --tail=1'
Mar 11 19:46:06.236: INFO: stderr: ""
Mar 11 19:46:06.236: INFO: stdout: "I0311 19:46:06.145951       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/vjzw 561\n"
Mar 11 19:46:06.236: INFO: got output "I0311 19:46:06.145951       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/vjzw 561\n"
STEP: limiting log bytes
Mar 11 19:46:06.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190 --limit-bytes=1'
Mar 11 19:46:06.391: INFO: stderr: ""
Mar 11 19:46:06.391: INFO: stdout: "I"
Mar 11 19:46:06.391: INFO: got output "I"
STEP: exposing timestamps
Mar 11 19:46:06.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190 --tail=1 --timestamps'
Mar 11 19:46:06.544: INFO: stderr: ""
Mar 11 19:46:06.544: INFO: stdout: "2021-03-11T19:46:06.346069207Z I0311 19:46:06.345897       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/k5d 344\n"
Mar 11 19:46:06.544: INFO: got output "2021-03-11T19:46:06.346069207Z I0311 19:46:06.345897       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/k5d 344\n"
STEP: restricting to a time range
Mar 11 19:46:09.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190 --since=1s'
Mar 11 19:46:09.192: INFO: stderr: ""
Mar 11 19:46:09.192: INFO: stdout: "I0311 19:46:08.345928       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/6l64 554\nI0311 19:46:08.545903       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/6zb 311\nI0311 19:46:08.745928       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/kt7g 287\nI0311 19:46:08.945917       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/7lcr 311\nI0311 19:46:09.145911       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/8td 202\n"
Mar 11 19:46:09.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3190 --since=24h'
Mar 11 19:46:09.346: INFO: stderr: ""
Mar 11 19:46:09.346: INFO: stdout: "I0311 19:46:04.145736       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/2j2 311\nI0311 19:46:04.345901       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/l6n 250\nI0311 19:46:04.545906       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/chw 568\nI0311 19:46:04.745877       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/lsq 538\nI0311 19:46:04.945958       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/fx6 334\nI0311 19:46:05.145917       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/xmt 437\nI0311 19:46:05.345912       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/t5v 388\nI0311 19:46:05.545950       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/68cx 451\nI0311 19:46:05.745924       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/2s5x 220\nI0311 19:46:05.945921       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/9bg 294\nI0311 19:46:06.145951       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/vjzw 561\nI0311 19:46:06.345897       1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/k5d 344\nI0311 19:46:06.545883       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/d92 565\nI0311 19:46:06.745899       1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/gwxj 285\nI0311 19:46:06.945917       1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/tq8 393\nI0311 19:46:07.145897       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/nr7 594\nI0311 19:46:07.345875       1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/wz4 392\nI0311 19:46:07.545888       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4p8r 510\nI0311 19:46:07.745906       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/k2d 384\nI0311 19:46:07.945924       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/9fp 422\nI0311 19:46:08.145902       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/hjls 525\nI0311 19:46:08.345928       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/6l64 554\nI0311 19:46:08.545903       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/6zb 311\nI0311 19:46:08.745928       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/kt7g 287\nI0311 19:46:08.945917       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/7lcr 311\nI0311 19:46:09.145911       1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/8td 202\n"
[AfterEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Mar 11 19:46:09.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3190'
Mar 11 19:46:16.498: INFO: stderr: ""
Mar 11 19:46:16.498: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:46:16.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3190" for this suite.

• [SLOW TEST:14.885 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":179,"skipped":3084,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
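Note: the log-filtering flags exercised by this test can be reproduced against any running pod with a live cluster; a sketch using the pod name and namespace from this run, which will differ elsewhere:

```shell
# Tail only the most recent line (the --tail=1 step above)
kubectl logs logs-generator -c logs-generator -n kubectl-3190 --tail=1

# Cap output at a byte count (the --limit-bytes=1 step returned just "I")
kubectl logs logs-generator -c logs-generator -n kubectl-3190 --limit-bytes=1

# Prefix each line with its RFC3339 timestamp
kubectl logs logs-generator -c logs-generator -n kubectl-3190 --tail=1 --timestamps

# Restrict output to entries newer than the given duration
kubectl logs logs-generator -c logs-generator -n kubectl-3190 --since=1s
```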
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:46:16.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3955
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:46:16.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 11 19:46:24.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3955 create -f -'
Mar 11 19:46:24.938: INFO: stderr: ""
Mar 11 19:46:24.938: INFO: stdout: "e2e-test-crd-publish-openapi-2429-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 11 19:46:24.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3955 delete e2e-test-crd-publish-openapi-2429-crds test-cr'
Mar 11 19:46:25.085: INFO: stderr: ""
Mar 11 19:46:25.085: INFO: stdout: "e2e-test-crd-publish-openapi-2429-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Mar 11 19:46:25.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3955 apply -f -'
Mar 11 19:46:25.330: INFO: stderr: ""
Mar 11 19:46:25.330: INFO: stdout: "e2e-test-crd-publish-openapi-2429-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 11 19:46:25.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3955 delete e2e-test-crd-publish-openapi-2429-crds test-cr'
Mar 11 19:46:25.494: INFO: stderr: ""
Mar 11 19:46:25.494: INFO: stdout: "e2e-test-crd-publish-openapi-2429-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 11 19:46:25.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2429-crds'
Mar 11 19:46:25.742: INFO: stderr: ""
Mar 11 19:46:25.742: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2429-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:46:28.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3955" for this suite.

• [SLOW TEST:12.169 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":180,"skipped":3090,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:46:28.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-5974
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Mar 11 19:46:28.800: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 19:46:36.734: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:46:53.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5974" for this suite.

• [SLOW TEST:24.691 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":181,"skipped":3102,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:46:53.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in limitrange-5313
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Mar 11 19:46:53.497: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Mar 11 19:46:53.501: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Mar 11 19:46:53.501: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Mar 11 19:46:53.515: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Mar 11 19:46:53.515: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Mar 11 19:46:53.527: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Mar 11 19:46:53.527: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Mar 11 19:47:00.578: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:47:00.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5313" for this suite.

• [SLOW TEST:7.230 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":182,"skipped":3107,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
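Note: the defaults verified by this test correspond to a LimitRange manifest along these lines (a reconstruction from the logged quantities; the metadata name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: e2e-limitrange        # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:           # applied when a pod omits resource requests
      cpu: 100m               # logged as {{100 -3}} DecimalSI
      memory: 200Mi           # logged as 209715200 BinarySI
      ephemeral-storage: 200Gi  # logged as 214748364800 BinarySI
    default:                  # applied when a pod omits resource limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
```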
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:47:00.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4426
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 11 19:47:00.722: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 19:47:00.736: INFO: Waiting for terminating namespaces to be deleted...
Mar 11 19:47:00.738: INFO: 
Logging pods the kubelet thinks are on node node1 before test
Mar 11 19:47:00.752: INFO: pod-no-resources from limitrange-5313 started at 2021-03-11 19:46:53 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.752: INFO: 	Container pause ready: true, restart count 0
Mar 11 19:47:00.752: INFO: collectd-4rvsd from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.752: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:47:00.753: INFO: cmk-s6v97 from kube-system started at 2021-03-11 18:03:34 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:47:00.753: INFO: nginx-proxy-node1 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:47:00.753: INFO: node-feature-discovery-worker-nf56t from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:47:00.753: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:47:00.753: INFO: prometheus-k8s-0 from monitoring started at 2021-03-11 18:04:37 +0000 UTC (5 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container grafana ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 19:47:00.753: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 19:47:00.753: INFO: kube-multus-ds-amd64-gtmmz from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:47:00.753: INFO: kube-flannel-8pz9c from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:47:00.753: INFO: cmk-init-discover-node2-29mrv from kube-system started at 2021-03-11 18:03:13 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:47:00.753: INFO: 	Container init ready: false, restart count 0
Mar 11 19:47:00.753: INFO: 	Container install ready: false, restart count 0
Mar 11 19:47:00.753: INFO: cmk-webhook-888945845-2gpfq from kube-system started at 2021-03-11 18:03:34 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 19:47:00.753: INFO: node-exporter-mw629 from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:47:00.753: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:47:00.753: INFO: kube-proxy-5zz5g from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.753: INFO: 	Container kube-proxy ready: true, restart count 2
Mar 11 19:47:00.753: INFO: 
Logging pods the kubelet thinks are on node node2 before test
Mar 11 19:47:00.768: INFO: nginx-proxy-node2 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:47:00.768: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 19:47:00.768: INFO: pfpod from limitrange-5313 started at 2021-03-11 19:46:55 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container pause ready: true, restart count 0
Mar 11 19:47:00.768: INFO: kube-flannel-8wwvj from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:47:00.768: INFO: node-exporter-x6vqx from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:47:00.768: INFO: collectd-86ww6 from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:47:00.768: INFO: node-feature-discovery-worker-8xdg7 from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:47:00.768: INFO: kube-multus-ds-amd64-rpm89 from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:47:00.768: INFO: prometheus-operator-f66f5fb4d-f2pkm from monitoring started at 2021-03-11 18:04:21 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container prometheus-operator ready: true, restart count 0
Mar 11 19:47:00.768: INFO: kube-proxy-znx8n from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:47:00.768: INFO: cmk-init-discover-node2-c5j6h from kube-system started at 2021-03-11 18:02:02 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container init ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container install ready: false, restart count 0
Mar 11 19:47:00.768: INFO: cmk-init-discover-node2-qbc6m from kube-system started at 2021-03-11 18:02:53 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container init ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container install ready: false, restart count 0
Mar 11 19:47:00.768: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz from monitoring started at 2021-03-11 18:07:22 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 19:47:00.768: INFO: cmk-slzjv from kube-system started at 2021-03-11 18:03:33 +0000 UTC (2 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:47:00.768: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:47:00.768: INFO: kubernetes-dashboard-57777fbdcb-zsnff from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Mar 11 19:47:00.768: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:47:00.768: INFO: cmk-init-discover-node2-9knwq from kube-system started at 2021-03-11 18:02:23 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container init ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container install ready: false, restart count 0
Mar 11 19:47:00.768: INFO: pod-partial-resources from limitrange-5313 started at 2021-03-11 19:46:53 +0000 UTC (1 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container pause ready: true, restart count 0
Mar 11 19:47:00.768: INFO: cmk-init-discover-node1-vk7wm from kube-system started at 2021-03-11 18:01:40 +0000 UTC (3 container statuses recorded)
Mar 11 19:47:00.768: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container init ready: false, restart count 0
Mar 11 19:47:00.768: INFO: 	Container install ready: false, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
Mar 11 19:47:06.875: INFO: Pod cmk-s6v97 requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod cmk-slzjv requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod cmk-webhook-888945845-2gpfq requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod kube-flannel-8pz9c requesting resource cpu=150m on Node node1
Mar 11 19:47:06.875: INFO: Pod kube-flannel-8wwvj requesting resource cpu=150m on Node node2
Mar 11 19:47:06.875: INFO: Pod kube-multus-ds-amd64-gtmmz requesting resource cpu=100m on Node node1
Mar 11 19:47:06.875: INFO: Pod kube-multus-ds-amd64-rpm89 requesting resource cpu=100m on Node node2
Mar 11 19:47:06.875: INFO: Pod kube-proxy-5zz5g requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod kube-proxy-znx8n requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod kubernetes-dashboard-57777fbdcb-zsnff requesting resource cpu=50m on Node node2
Mar 11 19:47:06.875: INFO: Pod kubernetes-metrics-scraper-54fbb4d595-dq4gp requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Mar 11 19:47:06.875: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Mar 11 19:47:06.875: INFO: Pod node-feature-discovery-worker-8xdg7 requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod node-feature-discovery-worker-nf56t requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod pfpod requesting resource cpu=10m on Node node2
Mar 11 19:47:06.875: INFO: Pod collectd-4rvsd requesting resource cpu=0m on Node node1
Mar 11 19:47:06.875: INFO: Pod collectd-86ww6 requesting resource cpu=0m on Node node2
Mar 11 19:47:06.875: INFO: Pod node-exporter-mw629 requesting resource cpu=112m on Node node1
Mar 11 19:47:06.875: INFO: Pod node-exporter-x6vqx requesting resource cpu=112m on Node node2
Mar 11 19:47:06.875: INFO: Pod prometheus-k8s-0 requesting resource cpu=300m on Node node1
Mar 11 19:47:06.875: INFO: Pod prometheus-operator-f66f5fb4d-f2pkm requesting resource cpu=100m on Node node2
Mar 11 19:47:06.875: INFO: Pod tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz requesting resource cpu=0m on Node node2
STEP: Starting Pods to consume most of the cluster CPU.
Mar 11 19:47:06.875: INFO: Creating a pod which consumes cpu=53517m on Node node2
Mar 11 19:47:06.886: INFO: Creating a pod which consumes cpu=53419m on Node node1
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170703d2014], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4426/filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614 to node1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170c281f66b], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.169/24]]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170c35deb9f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170e52086ad], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170ebba134f], Reason = [Created], Message = [Created container filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614.166b6170f1612fe9], Reason = [Started], Message = [Started container filler-pod-30f5d5fb-c901-44ba-9292-1331f43ab614]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b61706fa791d1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4426/filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a to node2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b6170c5bf9109], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.175/24]]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b6170c689aaa1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b6170e5866a65], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b6170eb2205c6], Reason = [Created], Message = [Created container filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a.166b6170f05a719f], Reason = [Started], Message = [Started container filler-pod-ab7c0294-71bb-4014-ad78-f44c967f055a]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.166b61715ff70df4], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.166b6171603ff435], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:47:11.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4426" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:11.353 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":183,"skipped":3130,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
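The scheduling test above sums each node's existing CPU requests in millicores, then creates a "filler" pod requesting the remainder of the node's allocatable CPU so that any additional pod must fail with `Insufficient cpu`. A minimal sketch of that arithmetic, using node2's requests as reported in the log (the 54064m allocatable figure is inferred from the log's 53517m filler size, not stated in it):

```python
def parse_millicores(q: str) -> int:
    """Parse a Kubernetes CPU quantity such as '150m' or '2' into millicores."""
    return int(q[:-1]) if q.endswith("m") else int(q) * 1000

def filler_request(allocatable_m: int, pod_requests: list) -> int:
    """CPU a filler pod must request to leave no schedulable room on the node."""
    used = sum(parse_millicores(q) for q in pod_requests)
    return allocatable_m - used

# node2's per-pod CPU requests as logged above
node2 = ["0m", "150m", "100m", "0m", "50m", "0m", "25m", "0m",
         "0m", "10m", "0m", "112m", "100m", "0m"]
```

With the inferred allocatable of 54064m, `filler_request(54064, node2)` reproduces the logged `cpu=53517m` filler pod on node2.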
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:47:11.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-643
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-4716c2b6-21e2-4a5a-9526-4a01278b2716
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:47:12.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-643" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":184,"skipped":3138,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
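The ConfigMap test above passes because the apiserver rejects the create: data keys must be non-empty, at most 253 characters, and limited to alphanumerics, `-`, `_` and `.`. A sketch of that validation rule (the helper name is ours, not the upstream function):

```python
import re

# Mirrors the upstream ConfigMap key rule: non-empty, <= 253 chars,
# characters drawn from [-._a-zA-Z0-9] only.
_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    return 0 < len(key) <= 253 and bool(_KEY_RE.match(key))
```

An empty key, as created by the test, fails the length check before the pattern is even consulted.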
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:47:12.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-7826
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:12.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7826" for this suite.

• [SLOW TEST:60.150 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3179,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
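The probe test above relies on an asymmetry in kubelet behavior: a failing readiness probe only flips the pod's Ready condition, while only a failing liveness probe triggers a container restart. That is why the pod stays NotReady for the full minute with restart count 0. A simplified sketch of the distinction (illustration only, not the kubelet's actual code path):

```python
def probe_outcome(probe_kind: str, succeeded: bool) -> dict:
    """Simplified kubelet reaction to a probe result.

    Readiness failures affect only the Ready condition; liveness
    failures cause a restart. Startup probes and failure thresholds
    are omitted for brevity."""
    if probe_kind == "readiness":
        return {"ready": succeeded, "restart": False}
    if probe_kind == "liveness":
        return {"ready": None, "restart": not succeeded}
    raise ValueError(f"unknown probe kind: {probe_kind}")
```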
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:12.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7409
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 11 19:48:12.376: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7409 /api/v1/namespaces/watch-7409/configmaps/e2e-watch-test-resource-version a51530a4-8114-41db-8ed9-1784abfa48c5 41355 0 2021-03-11 19:48:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-03-11 19:48:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:48:12.376: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7409 /api/v1/namespaces/watch-7409/configmaps/e2e-watch-test-resource-version a51530a4-8114-41db-8ed9-1784abfa48c5 41356 0 2021-03-11 19:48:12 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-03-11 19:48:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:12.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7409" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":186,"skipped":3191,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
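The watch test above starts its watch at the resourceVersion returned by the first update, so it observes only the second MODIFIED (RV 41355) and the DELETED (RV 41356) events. A sketch of that replay semantics; note that real resourceVersions are opaque strings that clients must not compare numerically, so the integer comparison here (and the two earlier RVs) are illustration-only assumptions:

```python
def events_since(events, start_rv):
    """Replay only the changes recorded after start_rv.

    Treating resourceVersions as integers is a simplification for this
    sketch; the API contract only guarantees they are opaque and ordered
    per resource."""
    return [(rv, kind) for rv, kind in events if int(rv) > int(start_rv)]

# 41355/41356 come from the log; the first two RVs are hypothetical.
history = [("41353", "ADDED"), ("41354", "MODIFIED"),
           ("41355", "MODIFIED"), ("41356", "DELETED")]
```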
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:12.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8906
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:48:12.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b" in namespace "projected-8906" to be "Succeeded or Failed"
Mar 11 19:48:12.525: INFO: Pod "downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.905093ms
Mar 11 19:48:14.529: INFO: Pod "downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005186478s
Mar 11 19:48:16.532: INFO: Pod "downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008116832s
STEP: Saw pod success
Mar 11 19:48:16.532: INFO: Pod "downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b" satisfied condition "Succeeded or Failed"
Mar 11 19:48:16.534: INFO: Trying to get logs from node node1 pod downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b container client-container: 
STEP: delete the pod
Mar 11 19:48:16.548: INFO: Waiting for pod downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b to disappear
Mar 11 19:48:16.550: INFO: Pod downwardapi-volume-7cd5bca9-f1e7-4e3b-9779-34599e38cf3b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:16.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8906" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3194,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
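The downward API test above projects the container's CPU limit into a volume file via a `resourceFieldRef` with a divisor; the written value is the limit divided by the divisor, rounded up. A sketch of that conversion (the example limits below are hypothetical, not taken from this run):

```python
import math

def downward_api_cpu_value(limit_millicores: int, divisor_millicores: int) -> int:
    """Value the downward-API volume writes for a CPU limit: the limit
    divided by the divisor, rounded up to the next integer."""
    return math.ceil(limit_millicores / divisor_millicores)
```

For example, a hypothetical 1250m limit read with a divisor of `1` (1000m) surfaces as `2`, while a divisor of `1m` surfaces it as `1250`.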
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:16.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3500
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:48:16.685: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 11 19:48:21.688: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 11 19:48:21.688: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 11 19:48:21.701: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-3500 /apis/apps/v1/namespaces/deployment-3500/deployments/test-cleanup-deployment a05251eb-9f21-4243-adc5-3741eda14f7e 41458 1 2021-03-11 19:48:21 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2021-03-11 19:48:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00434ca78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Mar 11 19:48:21.704: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-3500 /apis/apps/v1/namespaces/deployment-3500/replicasets/test-cleanup-deployment-b4867b47f 91413c28-2fa1-4331-b061-f9060b266cef 41460 1 2021-03-11 19:48:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a05251eb-9f21-4243-adc5-3741eda14f7e 0xc00434d2c0 0xc00434d2c1}] []  [{kube-controller-manager Update apps/v1 2021-03-11 19:48:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 53 50 53 49 101 98 45 57 102 50 49 45 52 50 52 51 45 97 100 99 53 45 51 55 52 49 101 100 97 49 52 102 55 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 
125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 
34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00434d3a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:48:21.704: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Mar 11 19:48:21.704: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-3500 /apis/apps/v1/namespaces/deployment-3500/replicasets/test-cleanup-controller bb108047-1b03-49e3-ba44-e447074f68d3 41459 1 2021-03-11 19:48:16 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a05251eb-9f21-4243-adc5-3741eda14f7e 0xc00434d157 0xc00434d158}] []  [{e2e.test Update apps/v1 2021-03-11 19:48:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 
114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2021-03-11 19:48:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 53 50 53 49 101 98 45 57 102 50 49 45 52 50 52 51 45 97 100 99 53 45 51 55 52 49 101 100 97 49 52 102 55 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00434d258  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 11 19:48:21.709: INFO: Pod "test-cleanup-controller-rw8fj" is available:
&Pod{ObjectMeta:{test-cleanup-controller-rw8fj test-cleanup-controller- deployment-3500 /api/v1/namespaces/deployment-3500/pods/test-cleanup-controller-rw8fj f71576ba-c6f2-42bc-8513-1b2e025f3c3b 41443 0 2021-03-11 19:48:16 +0000 UTC   map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.171"
    ],
    "mac": "66:2b:f5:14:e7:97",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.171"
    ],
    "mac": "66:2b:f5:14:e7:97",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-controller bb108047-1b03-49e3-ba44-e447074f68d3 0xc00434dab7 0xc00434dab8}] []  [{kube-controller-manager Update v1 2021-03-11 19:48:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 98 49 48 56 48 52 55 45 49 98 48 51 45 52 57 101 51 45 98 97 52 52 45 101 52 52 55 48 55 52 102 54 56 100 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 
125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {multus Update v1 2021-03-11 19:48:18 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 45 115 116 97 116 117 115 34 58 123 125 44 34 102 58 107 56 115 46 118 49 46 99 110 105 46 99 110 99 102 46 105 111 47 110 101 116 119 111 114 107 115 45 115 116 97 116 117 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 19:48:20 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 51 46 49 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-88g2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-88g2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-88g2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]E
nvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:48:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:48:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:48:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:48:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.171,StartTime:2021-03-11 19:48:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:48:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://70b7fc2f6502026ce718bb647224354ee64c63f4e7180b71f291e2bbf3384194,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:21.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3500" for this suite.

• [SLOW TEST:5.160 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":188,"skipped":3215,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
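The `FieldsV1{Raw:*[123 34 102 58 …]}` runs in the ReplicaSet and Pod dumps above are managed-fields JSON that Go's default formatter prints as decimal byte values. A minimal sketch of how to turn such a slice back into readable JSON (the list here is a tiny constructed example in the same encoding, not a full copy of the dump):

```python
# Decode a FieldsV1 Raw dump: Go prints []byte as a space-separated list of
# decimal byte values, so converting the numbers back to bytes and decoding
# as UTF-8 recovers the managed-fields JSON.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{}}
```

Pasting the full number run from one of the dumps through the same two lines yields the complete `{"f:metadata":…,"f:spec":…}` field-ownership document that `kube-controller-manager` and `e2e.test` each manage.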
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:21.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-7402
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 11 19:48:25.875: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:25.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7402" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3224,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:25.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3688
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 11 19:48:26.025: INFO: Waiting up to 5m0s for pod "pod-08baaa3f-5d95-4555-91b7-1b6171d9885d" in namespace "emptydir-3688" to be "Succeeded or Failed"
Mar 11 19:48:26.028: INFO: Pod "pod-08baaa3f-5d95-4555-91b7-1b6171d9885d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.800471ms
Mar 11 19:48:28.035: INFO: Pod "pod-08baaa3f-5d95-4555-91b7-1b6171d9885d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010152451s
Mar 11 19:48:30.040: INFO: Pod "pod-08baaa3f-5d95-4555-91b7-1b6171d9885d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015045778s
STEP: Saw pod success
Mar 11 19:48:30.040: INFO: Pod "pod-08baaa3f-5d95-4555-91b7-1b6171d9885d" satisfied condition "Succeeded or Failed"
Mar 11 19:48:30.042: INFO: Trying to get logs from node node1 pod pod-08baaa3f-5d95-4555-91b7-1b6171d9885d container test-container: 
STEP: delete the pod
Mar 11 19:48:30.055: INFO: Waiting for pod pod-08baaa3f-5d95-4555-91b7-1b6171d9885d to disappear
Mar 11 19:48:30.057: INFO: Pod pod-08baaa3f-5d95-4555-91b7-1b6171d9885d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:30.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3688" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3231,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
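The `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` lines above, with their `Phase="Pending" … Elapsed: …` progress reports, come from a poll-until-terminal-phase loop in the test framework. A minimal sketch of that pattern (the phase sequence is stubbed to mirror the log; this is not the framework's actual code):

```python
import time

def wait_for_pod(get_phase, timeout_s=300, interval_s=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")

# Stubbed phase sequence matching the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod(lambda: next(phases), interval_s=0.01))  # Succeeded
```

The real framework polls the API server for the pod object each iteration (roughly every two seconds, which matches the ~2s gaps between the Elapsed timestamps above) rather than calling a local stub.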
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:30.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5771
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 11 19:48:30.199: INFO: Waiting up to 5m0s for pod "pod-e4a59984-0470-42d3-9ab4-88912155940c" in namespace "emptydir-5771" to be "Succeeded or Failed"
Mar 11 19:48:30.202: INFO: Pod "pod-e4a59984-0470-42d3-9ab4-88912155940c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.689152ms
Mar 11 19:48:32.205: INFO: Pod "pod-e4a59984-0470-42d3-9ab4-88912155940c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006077736s
Mar 11 19:48:34.210: INFO: Pod "pod-e4a59984-0470-42d3-9ab4-88912155940c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011057404s
STEP: Saw pod success
Mar 11 19:48:34.210: INFO: Pod "pod-e4a59984-0470-42d3-9ab4-88912155940c" satisfied condition "Succeeded or Failed"
Mar 11 19:48:34.213: INFO: Trying to get logs from node node2 pod pod-e4a59984-0470-42d3-9ab4-88912155940c container test-container: 
STEP: delete the pod
Mar 11 19:48:34.232: INFO: Waiting for pod pod-e4a59984-0470-42d3-9ab4-88912155940c to disappear
Mar 11 19:48:34.234: INFO: Pod pod-e4a59984-0470-42d3-9ab4-88912155940c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:34.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5771" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3240,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:34.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-8307
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 11 19:48:38.401: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:38.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8307" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3277,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
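The two termination-message tests above exercise the documented kubelet behavior: the message normally comes from the file at `terminationMessagePath` (default `/dev/termination-log`), and with `terminationMessagePolicy: FallbackToLogsOnError` the log tail is used only when the container failed and left the file empty. A sketch of that decision logic (this models the documented semantics, not the kubelet source; the truncation size is an illustrative assumption):

```python
def termination_message(msg_file_contents, logs, policy, exit_code):
    # Prefer the termination-message file when the container wrote one.
    if msg_file_contents:
        return msg_file_contents
    # FallbackToLogsOnError: use the log tail, but only on failure.
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-2048:]  # kubelet caps the fallback tail (sketch value)
    return ""

# Mirrors the second test: the container succeeded and wrote "OK" to the
# file, so the message is taken from the file, not from the logs.
print(termination_message("OK", "startup log...", "FallbackToLogsOnError", 0))  # OK
```

This is why the log above reports `Expected: &{OK} to match Container's Termination Message: OK` even though the policy is `FallbackToLogsOnError`: the pod succeeded, so the fallback path was never taken.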
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:38.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7027
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:38.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7027" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":193,"skipped":3293,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:38.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-7427
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:48:38.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7427" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3321,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:48:38.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1907
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 11 19:48:38.844: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:38.845: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:38.845: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:38.846: INFO: Number of nodes with available pods: 0
Mar 11 19:48:38.846: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:39.850: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:39.850: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:39.850: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:39.853: INFO: Number of nodes with available pods: 0
Mar 11 19:48:39.853: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:40.851: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:40.851: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:40.851: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:40.854: INFO: Number of nodes with available pods: 0
Mar 11 19:48:40.854: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:41.852: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:41.852: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:41.852: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:41.855: INFO: Number of nodes with available pods: 0
Mar 11 19:48:41.855: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:42.852: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.852: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.852: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.855: INFO: Number of nodes with available pods: 2
Mar 11 19:48:42.855: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 11 19:48:42.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:42.870: INFO: Number of nodes with available pods: 1
Mar 11 19:48:42.870: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:43.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:43.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:43.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:43.877: INFO: Number of nodes with available pods: 1
Mar 11 19:48:43.877: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:44.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:44.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:44.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:44.878: INFO: Number of nodes with available pods: 1
Mar 11 19:48:44.878: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:45.874: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:45.874: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:45.874: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:45.876: INFO: Number of nodes with available pods: 1
Mar 11 19:48:45.876: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:46.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:46.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:46.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:46.878: INFO: Number of nodes with available pods: 1
Mar 11 19:48:46.878: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:47.878: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:47.878: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:47.878: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:47.882: INFO: Number of nodes with available pods: 1
Mar 11 19:48:47.882: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:48.878: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:48.878: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:48.878: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:48.882: INFO: Number of nodes with available pods: 1
Mar 11 19:48:48.882: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:49.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:49.876: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:49.876: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:49.879: INFO: Number of nodes with available pods: 1
Mar 11 19:48:49.879: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:50.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:50.876: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:50.876: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:50.879: INFO: Number of nodes with available pods: 1
Mar 11 19:48:50.879: INFO: Node node1 is running more than one daemon pod
Mar 11 19:48:51.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:51.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:51.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 11 19:48:51.879: INFO: Number of nodes with available pods: 2
Mar 11 19:48:51.879: INFO: Number of running nodes: 2, number of available pods: 2
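The repeated "DaemonSet pods can't tolerate node masterN" lines above come from the framework's taint/toleration check deciding which nodes to count. A minimal local sketch of that matching rule (hypothetical helpers, not the framework's actual code):

```python
# Hedged sketch of taint/toleration matching, the rule behind the
# "can't tolerate node ... skip checking this node" lines above.
# These helper names are illustrative, not Kubernetes source code.

def tolerates(toleration, taint):
    """A toleration matches a taint when effect and key agree (empty
    toleration effect/key match anything) and, for operator Equal,
    the values agree too."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True
    return toleration.get("value", "") == taint.get("value", "")

def node_is_schedulable(node_taints, pod_tolerations):
    # Every NoSchedule/NoExecute taint must be tolerated by some toleration.
    return all(
        any(tolerates(tol, taint) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] in ("NoSchedule", "NoExecute")
    )

master_taints = [{"key": "node-role.kubernetes.io/master",
                  "value": "", "effect": "NoSchedule"}]

# The test DaemonSet carries no master toleration, so master nodes are
# skipped and only node1/node2 are counted:
print(node_is_schedulable(master_taints, []))          # False
# A pod that tolerates the master taint would be schedulable there:
print(node_is_schedulable(
    master_taints,
    [{"key": "node-role.kubernetes.io/master",
      "operator": "Exists", "effect": "NoSchedule"}]))  # True
```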
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1907, will wait for the garbage collector to delete the pods
Mar 11 19:48:51.939: INFO: Deleting DaemonSet.extensions daemon-set took: 4.824791ms
Mar 11 19:48:52.539: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.344222ms
Mar 11 19:49:06.442: INFO: Number of nodes with available pods: 0
Mar 11 19:49:06.442: INFO: Number of running nodes: 0, number of available pods: 0
Mar 11 19:49:06.445: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1907/daemonsets","resourceVersion":"41893"},"items":null}

Mar 11 19:49:06.447: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1907/pods","resourceVersion":"41893"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:06.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1907" for this suite.

• [SLOW TEST:27.765 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":195,"skipped":3324,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:06.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9985
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 11 19:49:06.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9985'
Mar 11 19:49:06.889: INFO: stderr: ""
Mar 11 19:49:06.889: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 11 19:49:07.895: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:49:07.895: INFO: Found 0 / 1
Mar 11 19:49:08.895: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:49:08.895: INFO: Found 0 / 1
Mar 11 19:49:09.894: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:49:09.894: INFO: Found 1 / 1
Mar 11 19:49:09.895: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Mar 11 19:49:09.898: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:49:09.898: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 11 19:49:09.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-vsz2z --namespace=kubectl-9985 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 11 19:49:10.064: INFO: stderr: ""
Mar 11 19:49:10.065: INFO: stdout: "pod/agnhost-master-vsz2z patched\n"
STEP: checking annotations
Mar 11 19:49:10.067: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:49:10.067: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
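The `kubectl patch ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call above sends a merge-style patch to the API server. Its effect on plain maps can be simulated locally with a recursive merge (this helper is a hypothetical illustration, not kubectl's implementation, and it ignores strategic-merge list semantics):

```python
import json

def merge_patch(obj, patch):
    """Recursively merge `patch` into `obj`, as a JSON merge patch does
    for plain maps: nested dicts merge, null deletes, scalars replace."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)
        elif value is None:
            obj.pop(key, None)   # null deletes a key in merge-patch semantics
        else:
            obj[key] = value
    return obj

pod = {"metadata": {"name": "agnhost-master-vsz2z", "annotations": {}}}
patch = json.loads('{"metadata":{"annotations":{"x":"y"}}}')
merge_patch(pod, patch)
print(pod["metadata"]["annotations"])   # {'x': 'y'}
print(pod["metadata"]["name"])          # unchanged: agnhost-master-vsz2z
```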
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:10.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9985" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":196,"skipped":3327,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:10.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7020
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar 11 19:49:10.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 41931 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:49:10.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 41932 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:49:10.216: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 41933 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar 11 19:49:20.242: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 42013 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:49:20.242: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 42014 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 19:49:20.242: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7020 /api/v1/namespaces/watch-7020/configmaps/e2e-watch-test-label-changed d4dfa1c7-9329-47f7-b8bb-163f3d93ff1d 42015 0 2021-03-11 19:49:10 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2021-03-11 19:49:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
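The ADDED/MODIFIED/DELETED triples above follow from how a label-selector watch behaves: when an object's labels change so it no longer matches the selector, the watcher receives a synthetic DELETED; when the label is restored, it receives a synthetic ADDED. A local simulation of that dispatch (illustrative only, not client-go code):

```python
# Hedged sketch of selector-filtered watch semantics, matching the
# event sequence logged above for e2e-watch-test-label-changed.

SELECTOR = {"watch-this-configmap": "label-changed-and-restored"}

def matches(labels, selector=SELECTOR):
    return all(labels.get(k) == v for k, v in selector.items())

def watch_events(transitions):
    """Turn (labels_before, labels_after) transitions into the events a
    selector-filtered watcher observes; updates that never match the
    selector produce no event at all."""
    events = []
    for before, after in transitions:
        was, now = matches(before), matches(after)
        if not was and now:
            events.append("ADDED")      # started matching the selector
        elif was and not now:
            events.append("DELETED")    # stopped matching the selector
        elif was and now:
            events.append("MODIFIED")
    return events

good = {"watch-this-configmap": "label-changed-and-restored"}
other = {"watch-this-configmap": "wrong-value"}

# create, modify once, then change the label value away (as in the test):
print(watch_events([({}, good), (good, good), (good, other)]))
# ['ADDED', 'MODIFIED', 'DELETED']
# a modification while the label is wrong is not observed:
print(watch_events([(other, other)]))   # []
```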
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:20.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7020" for this suite.

• [SLOW TEST:10.173 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":197,"skipped":3341,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:20.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9494
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Mar 11 19:49:20.373: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Mar 11 19:49:20.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:20.582: INFO: stderr: ""
Mar 11 19:49:20.582: INFO: stdout: "service/agnhost-slave created\n"
Mar 11 19:49:20.583: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Mar 11 19:49:20.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:20.824: INFO: stderr: ""
Mar 11 19:49:20.824: INFO: stdout: "service/agnhost-master created\n"
Mar 11 19:49:20.824: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 11 19:49:20.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:21.062: INFO: stderr: ""
Mar 11 19:49:21.062: INFO: stdout: "service/frontend created\n"
Mar 11 19:49:21.062: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Mar 11 19:49:21.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:21.253: INFO: stderr: ""
Mar 11 19:49:21.253: INFO: stdout: "deployment.apps/frontend created\n"
Mar 11 19:49:21.253: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 11 19:49:21.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:21.479: INFO: stderr: ""
Mar 11 19:49:21.479: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar 11 19:49:21.480: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 11 19:49:21.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9494'
Mar 11 19:49:21.708: INFO: stderr: ""
Mar 11 19:49:21.708: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 11 19:49:21.708: INFO: Waiting for all frontend pods to be Running.
Mar 11 19:49:26.759: INFO: Waiting for frontend to serve content.
Mar 11 19:49:26.767: INFO: Trying to add a new entry to the guestbook.
Mar 11 19:49:26.778: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 11 19:49:26.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:26.926: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:26.926: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 11 19:49:26.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:27.061: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:27.061: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 11 19:49:27.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:27.196: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:27.196: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 11 19:49:27.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:27.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:27.335: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 11 19:49:27.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:27.442: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:27.442: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 11 19:49:27.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9494'
Mar 11 19:49:27.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:49:27.558: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:27.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9494" for this suite.

• [SLOW TEST:7.316 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":198,"skipped":3381,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:27.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1462
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 19:49:27.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f" in namespace "projected-1462" to be "Succeeded or Failed"
Mar 11 19:49:27.706: INFO: Pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.600639ms
Mar 11 19:49:29.710: INFO: Pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006345949s
Mar 11 19:49:31.714: INFO: Pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010431016s
Mar 11 19:49:33.718: INFO: Pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014631345s
STEP: Saw pod success
Mar 11 19:49:33.718: INFO: Pod "downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f" satisfied condition "Succeeded or Failed"
Mar 11 19:49:33.721: INFO: Trying to get logs from node node2 pod downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f container client-container: 
STEP: delete the pod
Mar 11 19:49:33.738: INFO: Waiting for pod downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f to disappear
Mar 11 19:49:33.740: INFO: Pod downwardapi-volume-7c0310dc-8de9-43bc-8b5f-a6f50bb76f9f no longer exists
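The pod in this test mounts a downward API volume that projects pod metadata into files, which the `client-container` then reads and prints. A rough sketch of that projection step, with assumed field paths and file names (not the test's exact volume spec):

```python
# Hedged sketch of downward API volume projection: each item resolves a
# fieldRef.fieldPath against the pod object and materializes it as a
# file, roughly as the kubelet does. Names here are assumptions.

def project_downward_api(pod, items):
    """Return {file_path: contents} for a list of downwardAPI items."""
    files = {}
    for item in items:
        value = pod
        for part in item["fieldRef"]["fieldPath"].split("."):
            value = value[part]   # walk e.g. metadata.name
        files[item["path"]] = str(value)
    return files

pod = {"metadata": {"name": "downwardapi-volume-7c0310dc",
                    "namespace": "projected-1462"}}
items = [{"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}]
print(project_downward_api(pod, items))
# {'podname': 'downwardapi-volume-7c0310dc'}
```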
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:33.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1462" for this suite.

• [SLOW TEST:6.179 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3420,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:33.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-165
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:49:57.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-165" for this suite.

• [SLOW TEST:24.139 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":200,"skipped":3423,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:49:57.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3360
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 11 19:50:06.062: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 11 19:50:06.064: INFO: Pod pod-with-poststart-http-hook still exists
Mar 11 19:50:08.067: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 11 19:50:08.070: INFO: Pod pod-with-poststart-http-hook still exists
Mar 11 19:50:10.065: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 11 19:50:10.068: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:50:10.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3360" for this suite.

• [SLOW TEST:12.189 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3476,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:50:10.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-723
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 11 19:50:10.214: INFO: Waiting up to 5m0s for pod "pod-8a8e7c3e-2674-42bb-af31-26b63951912f" in namespace "emptydir-723" to be "Succeeded or Failed"
Mar 11 19:50:10.217: INFO: Pod "pod-8a8e7c3e-2674-42bb-af31-26b63951912f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649622ms
Mar 11 19:50:12.223: INFO: Pod "pod-8a8e7c3e-2674-42bb-af31-26b63951912f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009064098s
Mar 11 19:50:14.229: INFO: Pod "pod-8a8e7c3e-2674-42bb-af31-26b63951912f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014741358s
STEP: Saw pod success
Mar 11 19:50:14.229: INFO: Pod "pod-8a8e7c3e-2674-42bb-af31-26b63951912f" satisfied condition "Succeeded or Failed"
Mar 11 19:50:14.232: INFO: Trying to get logs from node node1 pod pod-8a8e7c3e-2674-42bb-af31-26b63951912f container test-container: 
STEP: delete the pod
Mar 11 19:50:14.252: INFO: Waiting for pod pod-8a8e7c3e-2674-42bb-af31-26b63951912f to disappear
Mar 11 19:50:14.255: INFO: Pod pod-8a8e7c3e-2674-42bb-af31-26b63951912f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:50:14.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-723" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3481,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:50:14.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-9530
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-99895353-59fc-4d89-a625-2a4c07754cf2 in namespace container-probe-9530
Mar 11 19:50:18.406: INFO: Started pod liveness-99895353-59fc-4d89-a625-2a4c07754cf2 in namespace container-probe-9530
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 19:50:18.408: INFO: Initial restart count of pod liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is 0
Mar 11 19:50:36.442: INFO: Restart count of pod container-probe-9530/liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is now 1 (18.033006093s elapsed)
Mar 11 19:50:56.489: INFO: Restart count of pod container-probe-9530/liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is now 2 (38.080141497s elapsed)
Mar 11 19:51:16.524: INFO: Restart count of pod container-probe-9530/liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is now 3 (58.115679996s elapsed)
Mar 11 19:51:36.561: INFO: Restart count of pod container-probe-9530/liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is now 4 (1m18.152111822s elapsed)
Mar 11 19:52:46.684: INFO: Restart count of pod container-probe-9530/liveness-99895353-59fc-4d89-a625-2a4c07754cf2 is now 5 (2m28.275672844s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:52:46.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9530" for this suite.

• [SLOW TEST:152.435 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3537,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:52:46.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1738
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:52:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1738" for this suite.

• [SLOW TEST:7.139 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":204,"skipped":3580,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:52:53.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9939
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 11 19:52:53.975: INFO: Waiting up to 5m0s for pod "pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f" in namespace "emptydir-9939" to be "Succeeded or Failed"
Mar 11 19:52:53.977: INFO: Pod "pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.606027ms
Mar 11 19:52:55.980: INFO: Pod "pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005408532s
Mar 11 19:52:57.984: INFO: Pod "pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009569738s
STEP: Saw pod success
Mar 11 19:52:57.985: INFO: Pod "pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f" satisfied condition "Succeeded or Failed"
Mar 11 19:52:57.987: INFO: Trying to get logs from node node1 pod pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f container test-container: 
STEP: delete the pod
Mar 11 19:52:58.010: INFO: Waiting for pod pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f to disappear
Mar 11 19:52:58.012: INFO: Pod pod-f9f4ecac-75b4-4145-8cb9-a68e535bef8f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:52:58.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9939" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3581,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:52:58.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8023
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 11 19:53:04.675: INFO: Successfully updated pod "pod-update-3b582d25-7b37-4d4e-b79b-4a414770dee9"
STEP: verifying the updated pod is in kubernetes
Mar 11 19:53:04.680: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:53:04.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8023" for this suite.

• [SLOW TEST:6.671 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3583,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:53:04.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-878
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-878
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-878
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-878
Mar 11 19:53:04.823: INFO: Found 0 stateful pods, waiting for 1
Mar 11 19:53:14.827: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 11 19:53:14.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:53:15.120: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:53:15.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:53:15.120: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 19:53:15.122: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 11 19:53:25.125: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 19:53:25.125: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:53:25.135: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:25.135: INFO: ss-0  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  }]
Mar 11 19:53:25.135: INFO: 
Mar 11 19:53:25.135: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 11 19:53:26.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997450943s
Mar 11 19:53:27.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991490871s
Mar 11 19:53:28.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988361184s
Mar 11 19:53:29.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984992949s
Mar 11 19:53:30.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982079981s
Mar 11 19:53:31.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.976816861s
Mar 11 19:53:32.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.972043212s
Mar 11 19:53:33.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967740288s
Mar 11 19:53:34.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 961.728199ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-878
Mar 11 19:53:35.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 19:53:35.450: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 19:53:35.450: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 19:53:35.450: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 19:53:35.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 19:53:35.700: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar 11 19:53:35.700: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 19:53:35.700: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 19:53:35.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 19:53:35.965: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar 11 19:53:35.965: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 19:53:35.965: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 19:53:35.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:53:35.968: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 19:53:35.968: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Mar 11 19:53:35.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:53:36.234: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:53:36.234: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:53:36.234: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 19:53:36.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:53:36.475: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:53:36.475: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:53:36.475: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 19:53:36.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-878 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 19:53:36.724: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 19:53:36.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 19:53:36.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 19:53:36.724: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:53:36.726: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar 11 19:53:46.735: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 19:53:46.735: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 19:53:46.735: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 19:53:46.745: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:46.745: INFO: ss-0  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  }]
Mar 11 19:53:46.745: INFO: ss-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:46.745: INFO: ss-2  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:46.745: INFO: 
Mar 11 19:53:46.745: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 11 19:53:47.749: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:47.749: INFO: ss-0  node1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  }]
Mar 11 19:53:47.749: INFO: ss-1  node2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:47.749: INFO: ss-2  node2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:47.749: INFO: 
Mar 11 19:53:47.749: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 11 19:53:48.753: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:48.753: INFO: ss-0  node1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:04 +0000 UTC  }]
Mar 11 19:53:48.753: INFO: ss-1  node2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:48.753: INFO: ss-2  node2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:48.753: INFO: 
Mar 11 19:53:48.753: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 11 19:53:49.757: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:49.757: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:49.757: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:49.757: INFO: 
Mar 11 19:53:49.757: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:50.761: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:50.761: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:50.761: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:50.761: INFO: 
Mar 11 19:53:50.761: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:51.764: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:51.764: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:51.765: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:51.765: INFO: 
Mar 11 19:53:51.765: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:52.770: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:52.770: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:52.770: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:52.770: INFO: 
Mar 11 19:53:52.770: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:53.774: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:53.774: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:53.774: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:53.774: INFO: 
Mar 11 19:53:53.774: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:54.780: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:54.780: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:54.780: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:54.780: INFO: 
Mar 11 19:53:54.780: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 11 19:53:55.783: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Mar 11 19:53:55.783: INFO: ss-1  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:55.783: INFO: ss-2  node2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-11 19:53:25 +0000 UTC  }]
Mar 11 19:53:55.783: INFO: 
Mar 11 19:53:55.783: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-878
Mar 11 19:53:56.786: INFO: Scaling statefulset ss to 0
Mar 11 19:53:56.795: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 11 19:53:56.797: INFO: Deleting all statefulset in ns statefulset-878
Mar 11 19:53:56.800: INFO: Scaling statefulset ss to 0
Mar 11 19:53:56.808: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 19:53:56.811: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:53:56.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-878" for this suite.

• [SLOW TEST:52.140 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":207,"skipped":3627,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
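The burst-scaling run above repeatedly scales the StatefulSet `ss` up and down while pods are unhealthy. A minimal sketch of such a StatefulSet is below; the image, labels, and headless-service name are assumptions, not taken from the test source — only the set name `ss` and container name `webserver` appear in the log:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # headless service name (assumed)
  replicas: 3
  selector:
    matchLabels:
      app: ss-demo             # labels are assumptions
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: httpd:2.4       # image is an assumption
```

The scale-down the log reports ("Scaling statefulset ss to 0") corresponds to `kubectl scale statefulset ss --replicas=0 -n statefulset-878`, after which the controller waits for `status.replicas` to reach 0.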
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:53:56.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-9368
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 19:54:03.006: INFO: DNS probes using dns-9368/dns-test-bb1661a9-8f31-41d9-a732-516eb1d35141 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:03.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9368" for this suite.

• [SLOW TEST:6.198 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":208,"skipped":3627,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
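The DNS test above creates a headless service and resolves pod hostnames beneath it. The mechanism is the pod's `hostname` and `subdomain` fields paired with a headless (`clusterIP: None`) service; a minimal sketch, reusing the names visible in the log but with assumed labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-9368
spec:
  clusterIP: None              # headless: DNS returns pod records directly
  selector:
    app: dns-querier           # selector labels are assumptions
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-9368
  labels:
    app: dns-querier
spec:
  hostname: dns-querier-2      # becomes the DNS label for this pod
  subdomain: dns-test-service-2  # must match the headless service name
  containers:
  - name: querier
    image: busybox             # image is an assumption
    command: ["sleep", "3600"]
```

With this in place, `dns-querier-2.dns-test-service-2.dns-9368.svc.cluster.local` resolves to the pod IP, which is exactly what the probe's `getent hosts` checks verify.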
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:03.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2749
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:54:03.589: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:54:05.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089243, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089243, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089243, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089243, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:54:08.609: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:08.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2749" for this suite.
STEP: Destroying namespace "webhook-2749-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.730 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":209,"skipped":3630,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
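The webhook test lists and collection-deletes `ValidatingWebhookConfiguration` objects that reject non-compliant ConfigMaps. A sketch of one such configuration, with all names, the path, and the rule details assumed for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config      # name is an assumption
webhooks:
- name: deny-configmap.example.com   # must be a domain-style name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook         # service name from the log
      namespace: webhook-2749
      path: /validate                # path is an assumption
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
```

The "Listing all of the created validation webhooks" step corresponds to `kubectl get validatingwebhookconfigurations`, and the collection delete to deleting them by label selector.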
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:08.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2814
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Mar 11 19:54:08.899: INFO: Waiting up to 5m0s for pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f" in namespace "var-expansion-2814" to be "Succeeded or Failed"
Mar 11 19:54:08.901: INFO: Pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304631ms
Mar 11 19:54:10.905: INFO: Pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005927212s
Mar 11 19:54:12.913: INFO: Pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014129769s
Mar 11 19:54:14.918: INFO: Pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018718152s
STEP: Saw pod success
Mar 11 19:54:14.918: INFO: Pod "var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f" satisfied condition "Succeeded or Failed"
Mar 11 19:54:14.921: INFO: Trying to get logs from node node2 pod var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f container dapi-container: 
STEP: delete the pod
Mar 11 19:54:15.048: INFO: Waiting for pod var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f to disappear
Mar 11 19:54:15.050: INFO: Pod var-expansion-10ced988-ed34-40d0-8b3d-b854e6b2bd8f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:15.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2814" for this suite.

• [SLOW TEST:6.297 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3638,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
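The variable-expansion test runs a pod whose command references an environment variable with the `$(VAR)` syntax, which the kubelet expands before starting the container. A minimal sketch with an assumed variable name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # image is an assumption
    env:
    - name: MESSAGE
      value: "hello from substitution"
    # $(MESSAGE) is expanded by Kubernetes, not by a shell
    command: ["/bin/echo", "$(MESSAGE)"]
```

The pod runs to completion and the test asserts the phase reaches `Succeeded`, as seen in the polling lines above.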
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:15.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2528
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 11 19:54:15.179: INFO: namespace kubectl-2528
Mar 11 19:54:15.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2528'
Mar 11 19:54:15.417: INFO: stderr: ""
Mar 11 19:54:15.417: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 11 19:54:16.421: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:54:16.421: INFO: Found 0 / 1
Mar 11 19:54:17.420: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:54:17.420: INFO: Found 0 / 1
Mar 11 19:54:18.423: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:54:18.423: INFO: Found 1 / 1
Mar 11 19:54:18.423: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar 11 19:54:18.426: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 11 19:54:18.426: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 11 19:54:18.426: INFO: wait on agnhost-master startup in kubectl-2528 
Mar 11 19:54:18.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-vqdl6 agnhost-master --namespace=kubectl-2528'
Mar 11 19:54:18.585: INFO: stderr: ""
Mar 11 19:54:18.585: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar 11 19:54:18.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2528'
Mar 11 19:54:18.767: INFO: stderr: ""
Mar 11 19:54:18.767: INFO: stdout: "service/rm2 exposed\n"
Mar 11 19:54:18.769: INFO: Service rm2 in namespace kubectl-2528 found.
STEP: exposing service
Mar 11 19:54:20.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2528'
Mar 11 19:54:20.976: INFO: stderr: ""
Mar 11 19:54:20.976: INFO: stdout: "service/rm3 exposed\n"
Mar 11 19:54:20.978: INFO: Service rm3 in namespace kubectl-2528 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:22.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2528" for this suite.

• [SLOW TEST:7.936 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":211,"skipped":3661,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
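The `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` step above is shorthand for creating a Service that selects the RC's pods. The generated object is roughly equivalent to this sketch (the selector label is inferred from the log's `app:agnhost` selector and may differ from the actual generated one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-2528
spec:
  selector:
    app: agnhost               # copied from the RC's pod labels
  ports:
  - port: 1234                 # service port, per --port
    targetPort: 6379           # container port, per --target-port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, produces an analogous Service that reuses `rm2`'s selector under a new name and port.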
SSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:22.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-7798
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7798" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":212,"skipped":3670,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
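The Lease test exercises CRUD on the `coordination.k8s.io` API. A minimal Lease object, with the name and holder identity assumed:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease             # name is an assumption
  namespace: lease-test-7798
spec:
  holderIdentity: demo-holder  # who currently holds the lease
  leaseDurationSeconds: 30     # how long the holder's claim is valid
```

Leases like this back kubelet node heartbeats and leader election; the conformance test simply verifies the API group's create/get/update/patch/delete/list operations work.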
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:23.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1308
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-c3703de8-913a-4bfa-a9b7-54c6cf1e14a3
STEP: Creating a pod to test consume secrets
Mar 11 19:54:23.299: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5" in namespace "projected-1308" to be "Succeeded or Failed"
Mar 11 19:54:23.302: INFO: Pod "pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08797ms
Mar 11 19:54:25.306: INFO: Pod "pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006792429s
Mar 11 19:54:27.310: INFO: Pod "pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01043216s
STEP: Saw pod success
Mar 11 19:54:27.310: INFO: Pod "pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5" satisfied condition "Succeeded or Failed"
Mar 11 19:54:27.312: INFO: Trying to get logs from node node1 pod pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5 container projected-secret-volume-test: 
STEP: delete the pod
Mar 11 19:54:27.325: INFO: Waiting for pod pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5 to disappear
Mar 11 19:54:27.327: INFO: Pod pod-projected-secrets-7da7a8c3-a442-4316-9e22-e058041d9ca5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:27.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1308" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3683,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
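The projected-secret test mounts a Secret through a `projected` volume and remaps its key to a different file path ("with mappings"). A sketch with assumed key and path names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # image is an assumption
    command: ["cat", "/etc/projected/new-path/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map  # Secret name from the log (truncated)
          items:
          - key: data-1                    # key/path mapping is an assumption
            path: new-path/data-1
```

The `items` mapping is what distinguishes this case from a plain mount: the secret key is exposed at the remapped path rather than at a file named after the key.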
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:27.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-412
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-412
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-412
STEP: creating replication controller externalsvc in namespace services-412
I0311 19:54:27.471174      12 runners.go:190] Created replication controller with name: externalsvc, namespace: services-412, replica count: 2
I0311 19:54:30.521665      12 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0311 19:54:33.522005      12 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Mar 11 19:54:33.534: INFO: Creating new exec pod
Mar 11 19:54:37.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-412 execpodc7ncd -- /bin/sh -x -c nslookup clusterip-service'
Mar 11 19:54:37.827: INFO: stderr: "+ nslookup clusterip-service\n"
Mar 11 19:54:37.827: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-412.svc.cluster.local\tcanonical name = externalsvc.services-412.svc.cluster.local.\nName:\texternalsvc.services-412.svc.cluster.local\nAddress: 10.233.44.64\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-412, will wait for the garbage collector to delete the pods
Mar 11 19:54:37.885: INFO: Deleting ReplicationController externalsvc took: 4.314253ms
Mar 11 19:54:37.985: INFO: Terminating ReplicationController externalsvc pods took: 100.42676ms
Mar 11 19:54:46.597: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:46.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-412" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:19.276 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":214,"skipped":3720,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:46.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7488
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Mar 11 19:54:46.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7488'
Mar 11 19:54:46.960: INFO: stderr: ""
Mar 11 19:54:46.960: INFO: stdout: "pod/pause created\n"
Mar 11 19:54:46.960: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 11 19:54:46.960: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7488" to be "running and ready"
Mar 11 19:54:46.962: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13137ms
Mar 11 19:54:48.966: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006374571s
Mar 11 19:54:50.970: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.010231334s
Mar 11 19:54:50.970: INFO: Pod "pause" satisfied condition "running and ready"
Mar 11 19:54:50.970: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 11 19:54:50.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7488'
Mar 11 19:54:51.124: INFO: stderr: ""
Mar 11 19:54:51.124: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 11 19:54:51.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7488'
Mar 11 19:54:51.282: INFO: stderr: ""
Mar 11 19:54:51.282: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 11 19:54:51.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7488'
Mar 11 19:54:51.453: INFO: stderr: ""
Mar 11 19:54:51.453: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 11 19:54:51.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7488'
Mar 11 19:54:51.597: INFO: stderr: ""
Mar 11 19:54:51.597: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Mar 11 19:54:51.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7488'
Mar 11 19:54:51.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 11 19:54:51.710: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 11 19:54:51.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7488'
Mar 11 19:54:51.895: INFO: stderr: "No resources found in kubectl-7488 namespace.\n"
Mar 11 19:54:51.895: INFO: stdout: ""
Mar 11 19:54:51.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7488 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 11 19:54:52.050: INFO: stderr: ""
Mar 11 19:54:52.050: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:52.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7488" for this suite.

• [SLOW TEST:5.447 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":215,"skipped":3732,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:52.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-282
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-282/secret-test-f5dd1324-362d-4bd5-845f-607ef18ca968
STEP: Creating a pod to test consume secrets
Mar 11 19:54:52.199: INFO: Waiting up to 5m0s for pod "pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f" in namespace "secrets-282" to be "Succeeded or Failed"
Mar 11 19:54:52.201: INFO: Pod "pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219227ms
Mar 11 19:54:54.204: INFO: Pod "pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00508068s
Mar 11 19:54:56.208: INFO: Pod "pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008735413s
STEP: Saw pod success
Mar 11 19:54:56.208: INFO: Pod "pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f" satisfied condition "Succeeded or Failed"
Mar 11 19:54:56.210: INFO: Trying to get logs from node node2 pod pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f container env-test: 
STEP: delete the pod
Mar 11 19:54:56.275: INFO: Waiting for pod pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f to disappear
Mar 11 19:54:56.277: INFO: Pod pod-configmaps-4eff1676-73a1-4b4a-b498-dd0ea856ce6f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:54:56.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-282" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3739,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:54:56.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-961
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:54:56.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:54:58.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089296, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089296, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089296, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089296, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:55:01.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 19:55:01.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:07.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-961" for this suite.
STEP: Destroying namespace "webhook-961-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.597 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":217,"skipped":3747,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:07.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9961
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b071216c-5236-4dad-995d-2076ab4ecac3
STEP: Creating a pod to test consume configMaps
Mar 11 19:55:08.023: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f" in namespace "configmap-9961" to be "Succeeded or Failed"
Mar 11 19:55:08.026: INFO: Pod "pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.903823ms
Mar 11 19:55:10.029: INFO: Pod "pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005600285s
Mar 11 19:55:12.033: INFO: Pod "pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009569935s
STEP: Saw pod success
Mar 11 19:55:12.033: INFO: Pod "pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f" satisfied condition "Succeeded or Failed"
Mar 11 19:55:12.035: INFO: Trying to get logs from node node2 pod pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f container configmap-volume-test: 
STEP: delete the pod
Mar 11 19:55:12.049: INFO: Waiting for pod pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f to disappear
Mar 11 19:55:12.051: INFO: Pod pod-configmaps-f3c2ff1a-b661-4240-afea-aa25fd06a81f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:12.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9961" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3766,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:12.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7147
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 19:55:12.371: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 19:55:14.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 19:55:16.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089312, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 19:55:19.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:19.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7147" for this suite.
STEP: Destroying namespace "webhook-7147-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.406 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":219,"skipped":3784,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:19.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-2186
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0311 19:55:21.124184      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 19:55:21.124: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:21.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2186" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":220,"skipped":3801,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:21.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-5658
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 11 19:55:25.281: INFO: &Pod{ObjectMeta:{send-events-ced69ef5-6f70-4c99-a1e8-f3d8f281b371  events-5658 /api/v1/namespaces/events-5658/pods/send-events-ced69ef5-6f70-4c99-a1e8-f3d8f281b371 204e3661-4837-4aff-b110-bf763f9ad8c6 44854 0 2021-03-11 19:55:21 +0000 UTC   map[name:foo time:256996237] map[k8s.v1.cni.cncf.io/network-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.200"
    ],
    "mac": "0a:da:d9:28:aa:70",
    "default": true,
    "dns": {}
}] k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.200"
    ],
    "mac": "0a:da:d9:28:aa:70",
    "default": true,
    "dns": {}
}] kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-03-11 19:55:21 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {multus Update v1 2021-03-11 19:55:22 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}},}} {kubelet Update v1 2021-03-11 19:55:24 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r6x6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r6x6v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r6x6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:fal
se,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:55:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:55:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:55:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-11 19:55:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.200,StartTime:2021-03-11 19:55:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-11 19:55:24 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:docker-pullable://us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:docker://239ac42b8b1471226a5a828748ba5ed2bfac3cb776117023d8fe7fe65d72d661,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Mar 11 19:55:27.286: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar 11 19:55:29.290: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:29.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5658" for this suite.

• [SLOW TEST:8.169 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":221,"skipped":3815,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
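The Events test above waits until it has seen one event from the scheduler and one from the kubelet for the pod it created; conceptually, it lists cluster events filtered on the involved object and the reporting source component. A minimal Python sketch of that filter (the sample event data and names below are illustrative, not taken from the e2e framework):

```python
def events_for(events, pod_name, source_component):
    """Select events whose involvedObject matches the pod and whose
    source component matches, mirroring the fieldSelector-style
    filtering the test performs for scheduler and kubelet events."""
    return [
        e for e in events
        if e["involvedObject"]["name"] == pod_name
        and e["source"]["component"] == source_component
    ]

# Hypothetical event list standing in for what the API server returns.
sample = [
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "default-scheduler"}, "reason": "Scheduled"},
    {"involvedObject": {"name": "send-events-x"},
     "source": {"component": "kubelet"}, "reason": "Started"},
    {"involvedObject": {"name": "other-pod"},
     "source": {"component": "kubelet"}, "reason": "Started"},
]
```

The test passes once both filtered lists are non-empty, which is what the "Saw scheduler event" and "Saw kubelet event" lines report.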
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:29.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-6637
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Mar 11 19:55:29.437: INFO: Waiting up to 5m0s for pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953" in namespace "var-expansion-6637" to be "Succeeded or Failed"
Mar 11 19:55:29.439: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Pending", Reason="", readiness=false. Elapsed: 1.856647ms
Mar 11 19:55:31.442: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005150261s
Mar 11 19:55:33.446: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008341773s
Mar 11 19:55:35.450: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012598639s
Mar 11 19:55:37.454: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016309009s
Mar 11 19:55:39.457: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.019418938s
STEP: Saw pod success
Mar 11 19:55:39.457: INFO: Pod "var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953" satisfied condition "Succeeded or Failed"
Mar 11 19:55:39.459: INFO: Trying to get logs from node node1 pod var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953 container dapi-container: 
STEP: delete the pod
Mar 11 19:55:39.473: INFO: Waiting for pod var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953 to disappear
Mar 11 19:55:39.475: INFO: Pod var-expansion-d55c6b80-ba54-444a-827e-b1b3530c0953 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:39.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6637" for this suite.

• [SLOW TEST:10.179 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3834,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:39.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-9154
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-9154
STEP: creating replication controller nodeport-test in namespace services-9154
I0311 19:55:39.615943      12 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9154, replica count: 2
I0311 19:55:42.666901      12 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0311 19:55:45.667513      12 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 11 19:55:45.667: INFO: Creating new exec pod
Mar 11 19:55:50.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9154 execpodl8dtv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Mar 11 19:55:50.986: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Mar 11 19:55:50.986: INFO: stdout: ""
Mar 11 19:55:50.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9154 execpodl8dtv -- /bin/sh -x -c nc -zv -t -w 2 10.233.1.157 80'
Mar 11 19:55:51.256: INFO: stderr: "+ nc -zv -t -w 2 10.233.1.157 80\nConnection to 10.233.1.157 80 port [tcp/http] succeeded!\n"
Mar 11 19:55:51.256: INFO: stdout: ""
Mar 11 19:55:51.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9154 execpodl8dtv -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.207 30842'
Mar 11 19:55:51.526: INFO: stderr: "+ nc -zv -t -w 2 10.10.190.207 30842\nConnection to 10.10.190.207 30842 port [tcp/30842] succeeded!\n"
Mar 11 19:55:51.527: INFO: stdout: ""
Mar 11 19:55:51.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9154 execpodl8dtv -- /bin/sh -x -c nc -zv -t -w 2 10.10.190.208 30842'
Mar 11 19:55:51.796: INFO: stderr: "+ nc -zv -t -w 2 10.10.190.208 30842\nConnection to 10.10.190.208 30842 port [tcp/30842] succeeded!\n"
Mar 11 19:55:51.796: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:51.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9154" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.323 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":223,"skipped":3835,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
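The NodePort test's `nc -zv -t -w 2 host port` probes above simply check that a TCP connection can be established, first to the service name and cluster IP on port 80, then to each node IP on the allocated node port (30842). A rough Python equivalent of such a probe, demonstrated against a throwaway local listener rather than a cluster:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Rough equivalent of `nc -zv -t -w 2 host port`: succeed iff a TCP
    connection can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Local stand-ins for a reachable endpoint and an unreachable one
# (in the log these would be the cluster IP and node IP:nodePort).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
open_port = server.getsockname()[1]

probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()  # nothing listens here anymore
```

As with `nc -z`, the handshake alone is the check; no payload is exchanged.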
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:51.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-708
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-3011010f-584b-44c3-8025-5bf13c1e09e3
STEP: Creating a pod to test consume configMaps
Mar 11 19:55:51.944: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1" in namespace "projected-708" to be "Succeeded or Failed"
Mar 11 19:55:51.946: INFO: Pod "pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.526945ms
Mar 11 19:55:53.950: INFO: Pod "pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006094918s
Mar 11 19:55:55.954: INFO: Pod "pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010406855s
STEP: Saw pod success
Mar 11 19:55:55.954: INFO: Pod "pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1" satisfied condition "Succeeded or Failed"
Mar 11 19:55:55.957: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 19:55:55.969: INFO: Waiting for pod pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1 to disappear
Mar 11 19:55:55.971: INFO: Pod pod-projected-configmaps-744a340b-d319-43eb-9e39-7607d54dcdc1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 19:55:55.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-708" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3844,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
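The projected-configMap test above relies on the `items` mapping of a projected volume: each selected configMap key is written to a chosen relative path inside the mount. A minimal sketch of that mapping (the key and path names below mirror the style the conformance tests use but are illustrative here):

```python
def project(configmap_data, items):
    """Model of a projected configMap volume with `items` mappings:
    each selected key lands at its mapped relative path, and keys
    not listed in `items` are not projected at all."""
    files = {}
    for item in items:
        files[item["path"]] = configmap_data[item["key"]]
    return files
```

So a configMap key `data-1` mapped via `{"key": "data-1", "path": "path/to/data-2"}` appears in the pod at `<mountPath>/path/to/data-2`, which is what the test container reads back and prints.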
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 19:55:55.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9647
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 11 19:55:56.104: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 19:55:56.126: INFO: Waiting for terminating namespaces to be deleted...
Mar 11 19:55:56.128: INFO: 
Logging pods the kubelet thinks are on node node1 before test
Mar 11 19:55:56.141: INFO: kube-proxy-5zz5g from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container kube-proxy ready: true, restart count 2
Mar 11 19:55:56.141: INFO: kube-flannel-8pz9c from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:55:56.141: INFO: cmk-init-discover-node2-29mrv from kube-system started at 2021-03-11 18:03:13 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:55:56.141: INFO: 	Container init ready: false, restart count 0
Mar 11 19:55:56.141: INFO: 	Container install ready: false, restart count 0
Mar 11 19:55:56.141: INFO: cmk-webhook-888945845-2gpfq from kube-system started at 2021-03-11 18:03:34 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 19:55:56.141: INFO: node-exporter-mw629 from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:55:56.141: INFO: collectd-4rvsd from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:55:56.141: INFO: nginx-proxy-node1 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:55:56.141: INFO: cmk-s6v97 from kube-system started at 2021-03-11 18:03:34 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:55:56.141: INFO: execpodl8dtv from services-9154 started at 2021-03-11 19:55:45 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container agnhost-pause ready: true, restart count 0
Mar 11 19:55:56.141: INFO: kube-multus-ds-amd64-gtmmz from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:55:56.141: INFO: node-feature-discovery-worker-nf56t from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:55:56.141: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:55:56.141: INFO: prometheus-k8s-0 from monitoring started at 2021-03-11 18:04:37 +0000 UTC (5 container statuses recorded)
Mar 11 19:55:56.141: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container grafana ready: true, restart count 0
Mar 11 19:55:56.141: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 19:55:56.142: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 19:55:56.142: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 19:55:56.142: INFO: nodeport-test-lvc9l from services-9154 started at 2021-03-11 19:55:39 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.142: INFO: 	Container nodeport-test ready: true, restart count 0
Mar 11 19:55:56.142: INFO: 
Logging pods the kubelet thinks are on node node2 before test
Mar 11 19:55:56.157: INFO: cmk-init-discover-node2-qbc6m from kube-system started at 2021-03-11 18:02:53 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container init ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container install ready: false, restart count 0
Mar 11 19:55:56.157: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz from monitoring started at 2021-03-11 18:07:22 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 19:55:56.157: INFO: kube-proxy-znx8n from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 19:55:56.157: INFO: cmk-init-discover-node2-c5j6h from kube-system started at 2021-03-11 18:02:02 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container init ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container install ready: false, restart count 0
Mar 11 19:55:56.157: INFO: cmk-slzjv from kube-system started at 2021-03-11 18:03:33 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 19:55:56.157: INFO: cmk-init-discover-node2-9knwq from kube-system started at 2021-03-11 18:02:23 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container init ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container install ready: false, restart count 0
Mar 11 19:55:56.157: INFO: kubernetes-dashboard-57777fbdcb-zsnff from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Mar 11 19:55:56.157: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 19:55:56.157: INFO: cmk-init-discover-node1-vk7wm from kube-system started at 2021-03-11 18:01:40 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container discover ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container init ready: false, restart count 0
Mar 11 19:55:56.157: INFO: 	Container install ready: false, restart count 0
Mar 11 19:55:56.157: INFO: send-events-ced69ef5-6f70-4c99-a1e8-f3d8f281b371 from events-5658 started at 2021-03-11 19:55:21 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container p ready: true, restart count 0
Mar 11 19:55:56.157: INFO: nginx-proxy-node2 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 19:55:56.157: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 19:55:56.157: INFO: collectd-86ww6 from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container collectd ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 19:55:56.157: INFO: nodeport-test-4q5wc from services-9154 started at 2021-03-11 19:55:39 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container nodeport-test ready: true, restart count 0
Mar 11 19:55:56.157: INFO: kube-flannel-8wwvj from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 19:55:56.157: INFO: node-exporter-x6vqx from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 19:55:56.157: INFO: node-feature-discovery-worker-8xdg7 from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 19:55:56.157: INFO: kube-multus-ds-amd64-rpm89 from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 19:55:56.157: INFO: prometheus-operator-f66f5fb4d-f2pkm from monitoring started at 2021-03-11 18:04:21 +0000 UTC (2 container statuses recorded)
Mar 11 19:55:56.157: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 19:55:56.157: INFO: 	Container prometheus-operator ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-94aa1801-d87f-460c-b8fe-d23c0ebd2c5f 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-94aa1801-d87f-460c-b8fe-d23c0ebd2c5f off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-94aa1801-d87f-460c-b8fe-d23c0ebd2c5f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:01:04.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9647" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:308.276 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":225,"skipped":3850,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
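The scheduling predicate exercised above treats two host-port requests as conflicting when port and protocol match and their host IPs overlap, where the wildcard `0.0.0.0` (the default for an empty hostIP) overlaps every address. A sketch of that rule as a standalone function (a simplified model, not the scheduler's actual code):

```python
def host_ports_conflict(a, b):
    """Simplified model of the node-ports predicate: two
    (hostIP, hostPort, protocol) requests conflict when port and
    protocol match and either side binds the wildcard address or
    they bind the same specific address."""
    if a["hostPort"] != b["hostPort"] or a["protocol"] != b["protocol"]:
        return False
    ip_a = a.get("hostIP") or "0.0.0.0"   # empty hostIP defaults to the wildcard
    ip_b = b.get("hostIP") or "0.0.0.0"
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# The two pods from the test: pod4 binds the wildcard, pod5 binds loopback.
pod4 = {"hostIP": "", "hostPort": 54322, "protocol": "TCP"}
pod5 = {"hostIP": "127.0.0.1", "hostPort": 54322, "protocol": "TCP"}
```

This is why pod5 stays Pending in the test: 54322/TCP on 127.0.0.1 collides with pod4's wildcard binding on the same node.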
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:01:04.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1402
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 11 20:01:04.391: INFO: Waiting up to 5m0s for pod "pod-236ea314-8eb1-46e3-a314-db123f93155e" in namespace "emptydir-1402" to be "Succeeded or Failed"
Mar 11 20:01:04.393: INFO: Pod "pod-236ea314-8eb1-46e3-a314-db123f93155e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602722ms
Mar 11 20:01:06.400: INFO: Pod "pod-236ea314-8eb1-46e3-a314-db123f93155e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008969001s
Mar 11 20:01:08.404: INFO: Pod "pod-236ea314-8eb1-46e3-a314-db123f93155e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013611442s
STEP: Saw pod success
Mar 11 20:01:08.404: INFO: Pod "pod-236ea314-8eb1-46e3-a314-db123f93155e" satisfied condition "Succeeded or Failed"
Mar 11 20:01:08.407: INFO: Trying to get logs from node node2 pod pod-236ea314-8eb1-46e3-a314-db123f93155e container test-container: 
STEP: delete the pod
Mar 11 20:01:08.429: INFO: Waiting for pod pod-236ea314-8eb1-46e3-a314-db123f93155e to disappear
Mar 11 20:01:08.431: INFO: Pod pod-236ea314-8eb1-46e3-a314-db123f93155e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:01:08.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1402" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3857,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
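The "(root,0777,tmpfs)" pod above can be sketched roughly as follows. The pod name, image, and command are illustrative assumptions (the real e2e pod uses the framework's own test image and arguments); only the tmpfs-backed emptyDir volume, checked for mode 0777 while running as root, comes from the test title:

```yaml
# Hypothetical sketch of an emptyDir-on-tmpfs pod like the one this test
# creates. Names, image, and command are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-sketch
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # assumption; the e2e suite uses its own image
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs backing, per the test title
```

The test waits for the pod to reach "Succeeded or Failed" and then inspects the container's logs, which matches the `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above.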
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:01:08.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1176
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 20:01:08.905: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 20:01:10.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089668, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089668, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089668, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751089668, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 20:01:13.927: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:01:13.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5991-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:01:20.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1176" for this suite.
STEP: Destroying namespace "webhook-1176-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.650 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":227,"skipped":3876,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
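For orientation, the registration step above ("Registering the mutating webhook for custom resource e2e-test-webhook-5991-crds.webhook.example.com via the AdmissionRegistration API") corresponds roughly to an object of the following shape. The webhook name, path, rule details, and timing values are illustrative assumptions; only the service name, namespace, and CRD group/resource appear in the log:

```yaml
# Hypothetical sketch of a v1 MutatingWebhookConfiguration pointing at the
# in-cluster webhook service deployed earlier in this test. The webhook
# name, path, operations, and CA bundle are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-sketch
webhooks:
- name: mutate-custom-resource.example.com   # assumed name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: e2e-test-webhook        # service name seen in the log
      namespace: webhook-1176       # namespace from this run
      path: /mutating-custom-resource   # assumed path
    caBundle: <base64-encoded CA>   # placeholder
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1", "v2"]
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-5991-crds"]
```

The test then creates a custom resource while v1 is the storage version, patches the CRD to make v2 the storage version, and patches the resource again, verifying the webhook mutates objects under both stored versions.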
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:01:20.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-600
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Mar 11 20:01:20.231: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-600" to be "Succeeded or Failed"
Mar 11 20:01:20.233: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010669ms
Mar 11 20:01:22.236: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004895643s
Mar 11 20:01:24.240: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00921641s
Mar 11 20:01:26.246: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015042285s
STEP: Saw pod success
Mar 11 20:01:26.246: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 11 20:01:26.249: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Mar 11 20:01:26.494: INFO: Waiting for pod pod-host-path-test to disappear
Mar 11 20:01:26.495: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:01:26.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-600" for this suite.

• [SLOW TEST:6.412 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3883,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:01:26.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5980
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-9d780f66-f577-43cb-8675-788b49290650
STEP: Creating a pod to test consume configMaps
Mar 11 20:01:26.641: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59" in namespace "projected-5980" to be "Succeeded or Failed"
Mar 11 20:01:26.643: INFO: Pod "pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.762943ms
Mar 11 20:01:28.648: INFO: Pod "pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007583299s
Mar 11 20:01:30.653: INFO: Pod "pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01271706s
STEP: Saw pod success
Mar 11 20:01:30.653: INFO: Pod "pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59" satisfied condition "Succeeded or Failed"
Mar 11 20:01:30.655: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59 container projected-configmap-volume-test: 
STEP: delete the pod
Mar 11 20:01:30.668: INFO: Waiting for pod pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59 to disappear
Mar 11 20:01:30.670: INFO: Pod pod-projected-configmaps-7273b603-dad1-45fd-8ba2-7d58ea40bf59 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:01:30.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5980" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":3919,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:01:30.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2494
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-d897a2ee-5eb3-4a8d-971b-77b4e802922e in namespace container-probe-2494
Mar 11 20:01:34.823: INFO: Started pod liveness-d897a2ee-5eb3-4a8d-971b-77b4e802922e in namespace container-probe-2494
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 20:01:34.825: INFO: Initial restart count of pod liveness-d897a2ee-5eb3-4a8d-971b-77b4e802922e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:05:35.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2494" for this suite.

• [SLOW TEST:244.735 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3920,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
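The probe exercised here can be sketched as below. The test passes because the TCP check on port 8080 keeps succeeding, so the pod's restartCount stays at 0 for the whole observation window (roughly four minutes, per the 244-second SLOW TEST duration above). The timing values are assumptions, not read from the log:

```yaml
# Illustrative liveness-probe stanza (assumed values) for a container that
# serves TCP on port 8080; the probe should never fail, so no restarts occur.
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 3
```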
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:05:35.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-296
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-296.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-296.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-296.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-296.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-296.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
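The `awk` expression in both probe scripts above derives the pod "A record" name by rewriting the pod IP's dots as dashes and appending `<namespace>.pod.cluster.local` (namespace `dns-296` in this run). A minimal sketch of that transformation, with an illustrative pod IP:

```python
# Derive the pod A-record name the way the probe scripts above do:
# hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'
# is equivalent, for an IPv4 address, to replacing dots with dashes.
def pod_a_record(pod_ip: str, namespace: str) -> str:
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

# 10.244.1.5 is an illustrative pod IP (the cluster's pod CIDRs are 10.244.x.x).
print(pod_a_record("10.244.1.5", "dns-296"))
# -> 10-244-1-5.dns-296.pod.cluster.local
```

Each script then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup, which is what the prober collects.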

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 11 20:05:41.584: INFO: DNS probes using dns-296/dns-test-ded52e3d-ca83-45b6-8055-150408c9986b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:05:41.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-296" for this suite.

• [SLOW TEST:6.186 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":231,"skipped":3937,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:05:41.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-8940
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:05:41.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8940" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":232,"skipped":3949,"failed":2,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:05:41.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9811
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:05:41.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 11 20:05:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -'
Mar 11 20:05:49.033: INFO: rc: 1
Mar 11 20:05:49.034: FAIL: failed to create random CR {"kind":"E2e-test-crd-publish-openapi-7231-crd","apiVersion":"crd-publish-openapi-test-unknown-in-nested.example.com/v1","metadata":{"name":"test-cr"},"spec":{"a":null,"b":[{"c":"d"}]}} for CRD that allows unknown properties in a nested object: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -:
Command stdout:

stderr:
error: error validating "STDIN": error validating data: unknown object type "nil" in E2e-test-crd-publish-openapi-7231-crd.spec.a; if you choose to ignore these errors, turn validation off with --validate=false

error:
exit status 1

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:235 +0xb35
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc00328ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc00328ac00, 0x4afad60)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
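The custom resource kubectl rejected in the failure above, copied from the FAIL message and reformatted for readability. Client-side validation stops on the explicit `null` in `spec.a` (`unknown object type "nil"`), even though the test expects the CRD, which preserves unknown fields in the embedded object, to accept it:

```python
import json

# The exact CR payload from the FAIL message above, reparsed to show the
# field that trips kubectl's client-side validation.
raw = (
    '{"kind":"E2e-test-crd-publish-openapi-7231-crd",'
    '"apiVersion":"crd-publish-openapi-test-unknown-in-nested.example.com/v1",'
    '"metadata":{"name":"test-cr"},'
    '"spec":{"a":null,"b":[{"c":"d"}]}}'
)
cr = json.loads(raw)
print(cr["spec"]["a"])  # None: the explicit null validation rejects
print(cr["spec"]["b"])  # [{'c': 'd'}]: unknown fields the CRD should preserve
```

The stderr suggests `--validate=false` as a workaround, but the test's point is that the published OpenAPI schema should already permit this payload, so the `rc: 1` from kubectl is recorded as a test failure.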
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "crd-publish-openapi-9811".
STEP: Found 0 events.
Mar 11 20:05:49.040: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 11 20:05:49.040: INFO: 
Mar 11 20:05:49.044: INFO: 
Logging node info for node master1
Mar 11 20:05:49.046: INFO: Node Info: &Node{ObjectMeta:{master1   /api/v1/nodes/master1 bc51b401-422a-4e82-b449-caa7cdc72ceb 47527 0 2021-03-11 17:50:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0e:0e:ac:80:fe:e5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 48 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kubelet Update v1 2021-03-11 20:05:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 
116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0cb21bb9b8b64bf38523b2f5a8bdad14,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:4a77cc46-4c80-409c-8c40-c24648f76e32,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5617799,},ContainerImage{Names:[alpine@sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be alpine:latest],SizeBytes:5613158,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 20:05:49.047: INFO: 
Logging kubelet events for node master1
Mar 11 20:05:49.049: INFO: 
Logging pods the kubelet thinks are on node master1
Mar 11 20:05:49.065: INFO: kube-proxy-bwz9p started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 20:05:49.065: INFO: kube-flannel-pzw7v started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Init container install-cni ready: true, restart count 2
Mar 11 20:05:49.065: INFO: 	Container kube-flannel ready: true, restart count 1
Mar 11 20:05:49.065: INFO: coredns-59dcc4799b-cp4vq started at 2021-03-11 17:53:08 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container coredns ready: true, restart count 1
Mar 11 20:05:49.065: INFO: kube-controller-manager-master1 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar 11 20:05:49.065: INFO: kube-apiserver-master1 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 20:05:49.065: INFO: kube-multus-ds-amd64-2jdtx started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:05:49.065: INFO: docker-registry-docker-registry-6d4484d8f9-pkjwp started at 2021-03-11 17:55:49 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container docker-registry ready: true, restart count 0
Mar 11 20:05:49.065: INFO: 	Container nginx ready: true, restart count 0
Mar 11 20:05:49.065: INFO: node-exporter-b54mc started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.065: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:05:49.065: INFO: kube-scheduler-master1 started at 2021-03-11 18:07:23 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.065: INFO: 	Container kube-scheduler ready: true, restart count 1
W0311 20:05:49.068777      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:05:49.094: INFO: 
Latency metrics for node master1
Mar 11 20:05:49.094: INFO: 
Logging node info for node master2
Mar 11 20:05:49.097: INFO: Node Info: &Node{ObjectMeta:{master2   /api/v1/nodes/master2 81d12a4f-6154-421a-896a-6071517cc7cf 47523 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8a:67:dc:b1:33:9d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 50 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {kubelet Update v1 2021-03-11 20:05:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 
116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 
58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234771968 0} {} 196518332Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361914368 0} {} 195665932Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:52:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b3061860c4ba472e9c76577f315c0ddb,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bc6d20a6-057d-4d5d-af80-cb65b29e2a9f,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 20:05:49.097: INFO: 
Logging kubelet events for node master2
Mar 11 20:05:49.099: INFO: 
Logging pods the kubelet thinks are on node master2
Mar 11 20:05:49.113: INFO: kube-controller-manager-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.113: INFO: 	Container kube-controller-manager ready: true, restart count 2
Mar 11 20:05:49.113: INFO: kube-scheduler-master2 started at 2021-03-11 17:57:56 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.113: INFO: 	Container kube-scheduler ready: true, restart count 2
Mar 11 20:05:49.113: INFO: kube-proxy-qg4j5 started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.113: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 20:05:49.113: INFO: kube-flannel-kfjhn started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 20:05:49.113: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 20:05:49.113: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 20:05:49.113: INFO: kube-multus-ds-amd64-xx6h7 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.113: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:05:49.113: INFO: node-exporter-j8bwb started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.114: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.114: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:05:49.114: INFO: kube-apiserver-master2 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.114: INFO: 	Container kube-apiserver ready: true, restart count 0
W0311 20:05:49.118169      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:05:49.141: INFO: 
Latency metrics for node master2
Mar 11 20:05:49.141: INFO: 
Logging node info for node master3
Mar 11 20:05:49.143: INFO: Node Info: &Node{ObjectMeta:{master3   /api/v1/nodes/master3 2ec4f135-9e61-46a6-a537-0ad6199eddb1 47530 0 2021-03-11 17:50:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4e:4a:32:07:d3:68"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-11 17:50:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 110 111 100 101 45 114 111 108 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 34 58 123 125 125 125 125],}} {kube-controller-manager Update v1 2021-03-11 17:52:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 49 46 48 47 50 52 92 34 34 58 123 125 125 44 34 102 58 116 97 105 110 116 115 34 58 123 125 125 125],}} {flanneld Update v1 2021-03-11 
17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 115 116 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 20:05:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 
97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 
44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 
58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234776064 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200361918464 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:50:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:54:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4167bf4cb2634ca88fc2626bbda0ce42,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:52af946c-b482-4940-ad01-ee4a9a06c438,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:41ed47389c835eb68215e8215f6d4bfa5123923afd7550dbae049cded27c41b4 quay.io/coreos/etcd:v3.4.3],SizeBytes:83576774,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 
quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[coredns/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c coredns/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:be8875e5584750b7a490244ee56a121a714aa3d124164a5090cd8b3570c5650f k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.8.1],SizeBytes:40684734,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 20:05:49.144: INFO: 
Logging kubelet events for node master3
Mar 11 20:05:49.146: INFO: 
Logging pods the kubelet thinks are on node master3
Mar 11 20:05:49.162: INFO: node-exporter-xgq5j started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.162: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:05:49.162: INFO: kube-apiserver-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 11 20:05:49.162: INFO: kube-controller-manager-master3 started at 2021-03-11 17:54:26 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-controller-manager ready: true, restart count 2
Mar 11 20:05:49.162: INFO: kube-scheduler-master3 started at 2021-03-11 17:51:21 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-scheduler ready: true, restart count 2
Mar 11 20:05:49.162: INFO: node-feature-discovery-controller-ccc948bcc-k5xj8 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container nfd-controller ready: true, restart count 0
Mar 11 20:05:49.162: INFO: coredns-59dcc4799b-cd6w4 started at 2021-03-11 17:53:13 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container coredns ready: true, restart count 2
Mar 11 20:05:49.162: INFO: kube-proxy-ktvzn started at 2021-03-11 17:51:51 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 20:05:49.162: INFO: kube-flannel-fkd4q started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 20:05:49.162: INFO: 	Container kube-flannel ready: true, restart count 1
Mar 11 20:05:49.162: INFO: kube-multus-ds-amd64-94kvc started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:05:49.162: INFO: dns-autoscaler-66498f5c5f-m7mx4 started at 2021-03-11 17:53:11 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.162: INFO: 	Container autoscaler ready: true, restart count 1
W0311 20:05:49.167201      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:05:49.190: INFO: 
Latency metrics for node master3
Mar 11 20:05:49.190: INFO: 
Logging node info for node node1
Mar 11 20:05:49.194: INFO: Node Info: &Node{ObjectMeta:{node1   /api/v1/nodes/node1 09564b93-d658-496c-8cb0-ca1148040536 47524 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"9a:2f:67:81:a9:4b"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 51 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:03:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 20:05:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:32 +0000 UTC,LastTransitionTime:2021-03-11 17:54:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 20:05:39 +0000 UTC,LastTransitionTime:2021-03-11 17:58:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14aafcebb52e4debae4bcb2b7efb6066,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:87cad20c-59df-4889-8b1c-8831f7bcac2e,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[@ 
:],SizeBytes:1002423280,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:5ae9a5d4f882cae1ddfb3aeb6c5c6645df57e77e3bdaf9083c3cde45c7f9cbc2 golang:alpine3.12],SizeBytes:301038054,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 
k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a295107679b0d92cb70145fc18fb53c76e79fceed7e1cf10ed763c7c102c5ebe alpine:3.12],SizeBytes:5577287,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
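(Editor's note: the `FieldsV1{Raw:*[123 34 ...]}` runs in the Node dump above are managedFields JSON printed by Go's `%v` verb as decimal byte values. A minimal sketch for recovering the JSON; `decode_fieldsv1` is a hypothetical helper name, not part of the test framework.)

```python
import json

def decode_fieldsv1(raw_bytes):
    """Turn a list of decimal byte values (Go %v output for []byte)
    back into the JSON object it encodes."""
    return json.loads(bytes(raw_bytes).decode("utf-8"))

# Example: the bytes for {"f:ttl":{}} as they appear in the log
sample = [123, 34, 102, 58, 116, 116, 108, 34, 58, 123, 125, 125]
print(decode_fieldsv1(sample))  # → {'f:ttl': {}}
```

Applied to the full arrays above, this yields the per-manager field ownership maps (e.g. which fields `kubelet`, `flanneld`, or `nfd-master` last applied).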
Mar 11 20:05:49.195: INFO: 
Logging kubelet events for node node1
Mar 11 20:05:49.198: INFO: 
Logging pods the kubelet thinks are on node node1
Mar 11 20:05:49.215: INFO: kube-multus-ds-amd64-gtmmz started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:05:49.215: INFO: node-feature-discovery-worker-nf56t started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 20:05:49.215: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 20:05:49.215: INFO: prometheus-k8s-0 started at 2021-03-11 18:04:37 +0000 UTC (0+5 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container grafana ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 20:05:49.215: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 20:05:49.215: INFO: kube-proxy-5zz5g started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container kube-proxy ready: true, restart count 2
Mar 11 20:05:49.215: INFO: kube-flannel-8pz9c started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 20:05:49.215: INFO: cmk-init-discover-node2-29mrv started at 2021-03-11 18:03:13 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:05:49.215: INFO: 	Container init ready: false, restart count 0
Mar 11 20:05:49.215: INFO: 	Container install ready: false, restart count 0
Mar 11 20:05:49.215: INFO: cmk-webhook-888945845-2gpfq started at 2021-03-11 18:03:34 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 20:05:49.215: INFO: node-exporter-mw629 started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:05:49.215: INFO: collectd-4rvsd started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container collectd ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.215: INFO: nginx-proxy-node1 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 20:05:49.215: INFO: cmk-s6v97 started at 2021-03-11 18:03:34 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.215: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 20:05:49.215: INFO: 	Container reconcile ready: true, restart count 0
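(Editor's note: the per-pod listing above follows a fixed `Container <name> ready: <bool>, restart count <n>` pattern, so readiness and restart totals for a node can be tallied mechanically. A minimal sketch under that assumption; `summarize` is a hypothetical helper, and init-container lines are intentionally not counted.)

```python
import re

# Matches the "Container <name> ready: <bool>, restart count <n>" lines
# emitted by the e2e framework (capital "Container" excludes init containers).
LINE_RE = re.compile(r"Container (\S+) ready: (true|false), restart count (\d+)")

def summarize(lines):
    """Return (ready, not_ready, total_restarts) across container status lines."""
    ready = not_ready = restarts = 0
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        if m.group(2) == "true":
            ready += 1
        else:
            not_ready += 1
        restarts += int(m.group(3))
    return ready, not_ready, restarts
```

Run over the node1 block above, this would flag the three not-ready containers of the completed `cmk-init-discover-node2-29mrv` pod alongside the ready ones.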
W0311 20:05:49.219296      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:05:49.251: INFO: 
Latency metrics for node node1
Mar 11 20:05:49.251: INFO: 
Logging node info for node node2
Mar 11 20:05:49.254: INFO: Node Info: &Node{ObjectMeta:{node2   /api/v1/nodes/node2 48280382-daca-4d2c-a30b-cd693b7dd3e5 47544 0 2021-03-11 17:51:58 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.15.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"02:6c:14:b4:02:16"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor nfd.node.kubernetes.io/worker.version:v0.5.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-03-11 17:51:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 111 100 101 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 116 116 108 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 100 
67 73 68 82 34 58 123 125 44 34 102 58 112 111 100 67 73 68 82 115 34 58 123 34 46 34 58 123 125 44 34 118 58 92 34 49 48 46 50 52 52 46 52 46 48 47 50 52 92 34 34 58 123 125 125 125 125],}} {kubeadm Update v1 2021-03-11 17:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 107 117 98 101 97 100 109 46 97 108 112 104 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 114 105 45 115 111 99 107 101 116 34 58 123 125 125 125 125],}} {flanneld Update v1 2021-03-11 17:54:35 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 100 97 116 97 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 98 97 99 107 101 110 100 45 116 121 112 101 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 107 117 98 101 45 115 117 98 110 101 116 45 109 97 110 97 103 101 114 34 58 123 125 44 34 102 58 102 108 97 110 110 101 108 46 97 108 112 104 97 46 99 111 114 101 111 115 46 99 111 109 47 112 117 98 108 105 99 45 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 78 101 116 119 111 114 107 85 110 97 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 
115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 125 125],}} {nfd-master Update v1 2021-03-11 17:59:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 102 101 97 116 117 114 101 45 108 97 98 101 108 115 34 58 123 125 44 34 102 58 110 102 100 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 119 111 114 107 101 114 46 118 101 114 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 68 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 69 83 78 73 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 50 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 66 87 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 67 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 68 81 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 
114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 70 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 65 86 88 53 49 50 86 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 70 77 65 51 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 72 76 69 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 73 66 80 66 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 77 80 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 82 84 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 83 84 73 66 80 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 99 112 117 105 100 46 86 77 88 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 104 97 114 100 119 97 114 101 95 109 117 108 116 105 116 104 114 101 97 100 105 110 103 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 112 115 116 97 116 101 46 116 117 114 98 111 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 
100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 67 77 84 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 76 51 67 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 65 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 66 77 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 112 117 45 114 100 116 46 82 68 84 77 79 78 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 99 111 110 102 105 103 46 78 79 95 72 90 95 70 85 76 76 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 115 101 108 105 110 117 120 46 101 110 97 98 108 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 102 117 108 108 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 109 97 106 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 
108 45 118 101 114 115 105 111 110 46 109 105 110 111 114 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 107 101 114 110 101 108 45 118 101 114 115 105 111 110 46 114 101 118 105 115 105 111 110 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 101 109 111 114 121 45 110 117 109 97 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 97 112 97 98 108 101 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 110 101 116 119 111 114 107 45 115 114 105 111 118 46 99 111 110 102 105 103 117 114 101 100 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 112 99 105 45 48 51 48 48 95 49 97 48 51 46 112 114 101 115 101 110 116 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 110 111 110 114 111 116 97 116 105 111 110 97 108 100 105 115 107 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 34 58 123 125 44 34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 97 106 111 114 34 58 123 125 44 
34 102 58 102 101 97 116 117 114 101 46 110 111 100 101 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 121 115 116 101 109 45 111 115 95 114 101 108 101 97 115 101 46 86 69 82 83 73 79 78 95 73 68 46 109 105 110 111 114 34 58 123 125 125 125 125],}} {Swagger-Codegen Update v1 2021-03-11 18:01:45 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 99 109 107 45 110 111 100 101 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 125 125 125],}} {kubelet Update v1 2021-03-11 20:05:40 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 97 114 99 104 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 104 111 115 116 110 97 109 101 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 111 115 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 100 100 114 101 115 115 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 72 111 
115 116 110 97 109 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 116 101 114 110 97 108 73 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 114 101 115 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 97 108 108 111 99 97 116 97 98 108 101 34 58 123 34 46 34 58 123 125 44 34 102 58 99 109 107 46 105 110 116 101 108 46 99 111 109 47 101 120 99 108 117 115 105 118 101 45 99 111 114 101 115 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 97 112 97 99 105 116 121 34 58 123 34 46 34 58 123 125 44 34 102 58 99 112 117 34 58 123 125 44 34 102 58 101 112 104 101 109 101 114 97 108 45 115 116 111 114 97 103 101 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 49 71 105 34 58 123 125 44 34 102 58 104 117 103 101 112 97 103 101 115 45 50 77 105 34 58 123 125 44 34 102 58 105 110 116 101 108 46 99 111 109 47 105 110 116 101 108 95 115 114 105 111 118 95 110 101 116 100 101 118 105 99 101 34 58 123 125 44 34 102 58 109 101 109 111 114 121 34 58 123 125 44 34 102 58 112 111 100 115 34 58 123 125 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 68 105 115 107 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 
116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 77 101 109 111 114 121 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 73 68 80 114 101 115 115 117 114 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 72 101 97 114 116 98 101 97 116 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 100 97 101 109 111 110 69 110 100 112 111 105 110 116 115 34 58 123 34 102 58 107 117 98 
101 108 101 116 69 110 100 112 111 105 110 116 34 58 123 34 102 58 80 111 114 116 34 58 123 125 125 125 44 34 102 58 105 109 97 103 101 115 34 58 123 125 44 34 102 58 110 111 100 101 73 110 102 111 34 58 123 34 102 58 97 114 99 104 105 116 101 99 116 117 114 101 34 58 123 125 44 34 102 58 98 111 111 116 73 68 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 82 117 110 116 105 109 101 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 101 114 110 101 108 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 80 114 111 120 121 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 107 117 98 101 108 101 116 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 109 97 99 104 105 110 101 73 68 34 58 123 125 44 34 102 58 111 112 101 114 97 116 105 110 103 83 121 115 116 101 109 34 58 123 125 44 34 102 58 111 115 73 109 97 103 101 34 58 123 125 44 34 102 58 115 121 115 116 101 109 85 85 73 68 34 58 123 125 125 125 125],}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201259671552 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178911977472 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-03-11 17:54:35 +0000 UTC,LastTransitionTime:2021-03-11 17:54:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:51:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-11 20:05:40 +0000 UTC,LastTransitionTime:2021-03-11 17:58:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:08627116483a4bf79f59d79a4a11d6f4,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:1be00882-edae-44a0-a65e-9f92c05d8856,KernelVersion:3.10.0-1160.15.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.12,KubeletVersion:v1.18.8,KubeProxyVersion:v1.18.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:18abffcf9afb2c3cb0afac67de5f1317f7dcd8925906c434f4e18812d9efbb54],SizeBytes:1727353823,},ContainerImage{Names:[localhost:30500/cmk@sha256:fdd523af421b0b21e1d9a0699b629bc50687a7de7dcea78afe470b8eaeed4ae2 
localhost:30500/cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726480407,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:38cd3fe450dcded05650b49cd4c95b41fce97503892b5b760e9395d127bdf276 kubernetesui/dashboard-amd64:v2.0.2],SizeBytes:224634189,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:173029757,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:162425213,},ContainerImage{Names:[nginx@sha256:f3693fe50d5b1df1ecd315d54813a77afd56b0245a404055a946574deb6b34fc nginx:1.19],SizeBytes:133050457,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 
k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:117264685,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:111705925,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:95308157,},ContainerImage{Names:[quay.io/kubernetes_incubator/node-feature-discovery@sha256:99fe53b4555e717de68505ec46a10bc0e19c5e0d998fde5035bb623a65c75916 quay.io/kubernetes_incubator/node-feature-discovery:v0.5.0],SizeBytes:86455274,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:6d451d92c921f14bfb38196aacb6e506d4593c5b3c9d40a8b8a2506010dc3e10 quay.io/coreos/flannel:v0.12.0],SizeBytes:52767393,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[lachlanevenson/k8s-helm@sha256:dcf2036282694c6bfe2a533636dd8f494f63186b98e4118e741be04a9115af6a lachlanevenson/k8s-helm:v3.2.3],SizeBytes:46479395,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:0ebc8fa00465a6b16bda934a7e3c12e008aa2ed9d9e2ae31d3faca0ab94ada86 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44376083,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 
quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:a7f6555decef3c061cfb669be5137d2209690cafe459204126e01276f113b9af kubernetesui/metrics-scraper:v1.0.5],SizeBytes:36703493,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:0a63703fc308c6cb4207a707146ef234ff92011ee350289beec821e9a2c42765 localhost:30500/tas-controller:0.1],SizeBytes:23811271,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:96cd5db59860a84139d8d35c2e7662504a7c6cba7810831ed9374e0ddd9b1333 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d 
gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 11 20:05:49.255: INFO: 
Logging kubelet events for node node2
Mar 11 20:05:49.257: INFO: 
Logging pods the kubelet thinks are on node node2
Mar 11 20:05:49.276: INFO: cmk-init-discover-node2-9knwq started at 2021-03-11 18:02:23 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container init ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container install ready: false, restart count 0
Mar 11 20:05:49.276: INFO: kubernetes-dashboard-57777fbdcb-zsnff started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Mar 11 20:05:49.276: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 started at 2021-03-11 18:00:01 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 20:05:49.276: INFO: cmk-init-discover-node1-vk7wm started at 2021-03-11 18:01:40 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container init ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container install ready: false, restart count 0
Mar 11 20:05:49.276: INFO: nginx-proxy-node2 started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 20:05:49.276: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp started at 2021-03-11 17:53:12 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Mar 11 20:05:49.276: INFO: collectd-86ww6 started at 2021-03-11 18:07:58 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container collectd ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.276: INFO: kube-flannel-8wwvj started at 2021-03-11 17:52:37 +0000 UTC (1+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Init container install-cni ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 20:05:49.276: INFO: node-exporter-x6vqx started at 2021-03-11 18:04:28 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:05:49.276: INFO: node-feature-discovery-worker-8xdg7 started at 2021-03-11 17:58:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 20:05:49.276: INFO: kube-multus-ds-amd64-rpm89 started at 2021-03-11 17:52:47 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:05:49.276: INFO: prometheus-operator-f66f5fb4d-f2pkm started at 2021-03-11 18:04:21 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container prometheus-operator ready: true, restart count 0
Mar 11 20:05:49.276: INFO: cmk-init-discover-node2-qbc6m started at 2021-03-11 18:02:53 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container init ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container install ready: false, restart count 0
Mar 11 20:05:49.276: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz started at 2021-03-11 18:07:22 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 20:05:49.276: INFO: kube-proxy-znx8n started at 2021-03-11 17:51:59 +0000 UTC (0+1 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 20:05:49.276: INFO: cmk-init-discover-node2-c5j6h started at 2021-03-11 18:02:02 +0000 UTC (0+3 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container init ready: false, restart count 0
Mar 11 20:05:49.276: INFO: 	Container install ready: false, restart count 0
Mar 11 20:05:49.276: INFO: cmk-slzjv started at 2021-03-11 18:03:33 +0000 UTC (0+2 container statuses recorded)
Mar 11 20:05:49.276: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 20:05:49.276: INFO: 	Container reconcile ready: true, restart count 0
W0311 20:05:49.280855      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:05:49.319: INFO: 
Latency metrics for node node2
Mar 11 20:05:49.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9811" for this suite.

• Failure [7.595 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

  Mar 11 20:05:49.034: failed to create random CR {"kind":"E2e-test-crd-publish-openapi-7231-crd","apiVersion":"crd-publish-openapi-test-unknown-in-nested.example.com/v1","metadata":{"name":"test-cr"},"spec":{"a":null,"b":[{"c":"d"}]}} for CRD that allows unknown properties in a nested object: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9811 create -f -:
  Command stdout:
  
  stderr:
  error: error validating "STDIN": error validating data: unknown object type "nil" in E2e-test-crd-publish-openapi-7231-crd.spec.a; if you choose to ignore these errors, turn validation off with --validate=false
  
  error:
  exit status 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:235
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":232,"skipped":3966,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:05:49.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4874
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:06:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4874" for this suite.

• [SLOW TEST:17.172 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":233,"skipped":3972,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:06:06.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-331
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 11 20:06:06.634: INFO: Waiting up to 5m0s for pod "pod-2f8e2af4-ad54-4e32-8e5d-062949c51111" in namespace "emptydir-331" to be "Succeeded or Failed"
Mar 11 20:06:06.636: INFO: Pod "pod-2f8e2af4-ad54-4e32-8e5d-062949c51111": Phase="Pending", Reason="", readiness=false. Elapsed: 1.965107ms
Mar 11 20:06:08.643: INFO: Pod "pod-2f8e2af4-ad54-4e32-8e5d-062949c51111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008970091s
Mar 11 20:06:10.648: INFO: Pod "pod-2f8e2af4-ad54-4e32-8e5d-062949c51111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014732163s
STEP: Saw pod success
Mar 11 20:06:10.648: INFO: Pod "pod-2f8e2af4-ad54-4e32-8e5d-062949c51111" satisfied condition "Succeeded or Failed"
Mar 11 20:06:10.650: INFO: Trying to get logs from node node2 pod pod-2f8e2af4-ad54-4e32-8e5d-062949c51111 container test-container: 
STEP: delete the pod
Mar 11 20:06:10.665: INFO: Waiting for pod pod-2f8e2af4-ad54-4e32-8e5d-062949c51111 to disappear
Mar 11 20:06:10.667: INFO: Pod pod-2f8e2af4-ad54-4e32-8e5d-062949c51111 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:06:10.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-331" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3986,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:06:10.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-2478
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-a000cddd-23b5-4b1f-aad6-c0124d95d1ab
STEP: Creating configMap with name cm-test-opt-upd-b8fcdbf6-fffc-43c9-9cd4-21c54ce559b0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a000cddd-23b5-4b1f-aad6-c0124d95d1ab
STEP: Updating configmap cm-test-opt-upd-b8fcdbf6-fffc-43c9-9cd4-21c54ce559b0
STEP: Creating configMap with name cm-test-opt-create-b90484a8-6f1e-40f4-8748-915cbaab2050
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:06:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2478" for this suite.

• [SLOW TEST:6.215 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4062,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:06:16.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4871
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:06:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4871" for this suite.

• [SLOW TEST:13.197 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":236,"skipped":4064,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:06:30.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-7955
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-796w
STEP: Creating a pod to test atomic-volume-subpath
Mar 11 20:06:30.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-796w" in namespace "subpath-7955" to be "Succeeded or Failed"
Mar 11 20:06:30.231: INFO: Pod "pod-subpath-test-projected-796w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.689445ms
Mar 11 20:06:32.236: INFO: Pod "pod-subpath-test-projected-796w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008533868s
Mar 11 20:06:34.240: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 4.011978333s
Mar 11 20:06:36.244: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 6.015837865s
Mar 11 20:06:38.249: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 8.021045757s
Mar 11 20:06:40.254: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 10.02565367s
Mar 11 20:06:42.257: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 12.029140222s
Mar 11 20:06:44.262: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 14.034476067s
Mar 11 20:06:46.265: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 16.037607602s
Mar 11 20:06:48.270: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 18.042243444s
Mar 11 20:06:50.274: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 20.045695164s
Mar 11 20:06:52.277: INFO: Pod "pod-subpath-test-projected-796w": Phase="Running", Reason="", readiness=true. Elapsed: 22.049414579s
Mar 11 20:06:54.282: INFO: Pod "pod-subpath-test-projected-796w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054410969s
STEP: Saw pod success
Mar 11 20:06:54.282: INFO: Pod "pod-subpath-test-projected-796w" satisfied condition "Succeeded or Failed"
Mar 11 20:06:54.286: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-796w container test-container-subpath-projected-796w: 
STEP: delete the pod
Mar 11 20:06:54.302: INFO: Waiting for pod pod-subpath-test-projected-796w to disappear
Mar 11 20:06:54.304: INFO: Pod pod-subpath-test-projected-796w no longer exists
STEP: Deleting pod pod-subpath-test-projected-796w
Mar 11 20:06:54.304: INFO: Deleting pod "pod-subpath-test-projected-796w" in namespace "subpath-7955"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:06:54.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7955" for this suite.

• [SLOW TEST:24.229 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":237,"skipped":4066,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:06:54.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4581
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Mar 11 20:06:54.446: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar 11 20:07:05.488: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:05.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4581" for this suite.

• [SLOW TEST:11.181 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4071,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:05.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9014
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-99554467-76f7-4f70-9bdc-9753efe777eb
STEP: Creating a pod to test consume secrets
Mar 11 20:07:05.639: INFO: Waiting up to 5m0s for pod "pod-secrets-fe19a460-3ab6-482c-9839-693698b48225" in namespace "secrets-9014" to be "Succeeded or Failed"
Mar 11 20:07:05.641: INFO: Pod "pod-secrets-fe19a460-3ab6-482c-9839-693698b48225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436443ms
Mar 11 20:07:07.646: INFO: Pod "pod-secrets-fe19a460-3ab6-482c-9839-693698b48225": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006782359s
Mar 11 20:07:09.649: INFO: Pod "pod-secrets-fe19a460-3ab6-482c-9839-693698b48225": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009936557s
STEP: Saw pod success
Mar 11 20:07:09.649: INFO: Pod "pod-secrets-fe19a460-3ab6-482c-9839-693698b48225" satisfied condition "Succeeded or Failed"
Mar 11 20:07:09.651: INFO: Trying to get logs from node node1 pod pod-secrets-fe19a460-3ab6-482c-9839-693698b48225 container secret-volume-test: 
STEP: delete the pod
Mar 11 20:07:09.667: INFO: Waiting for pod pod-secrets-fe19a460-3ab6-482c-9839-693698b48225 to disappear
Mar 11 20:07:09.669: INFO: Pod pod-secrets-fe19a460-3ab6-482c-9839-693698b48225 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:09.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9014" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4071,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:09.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3777
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Mar 11 20:07:09.799: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:09.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3777" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":240,"skipped":4082,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:09.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-9502
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 11 20:07:10.437: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar 11 20:07:12.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090030, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090030, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090030, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090030, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 20:07:15.460: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:07:15.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:21.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9502" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:11.704 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":241,"skipped":4136,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:21.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-4971
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 11 20:07:29.804: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 11 20:07:29.806: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 11 20:07:31.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 11 20:07:31.809: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 11 20:07:33.807: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 11 20:07:33.810: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:33.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4971" for this suite.

• [SLOW TEST:12.193 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4143,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:33.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9642
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 11 20:07:33.957: INFO: Waiting up to 5m0s for pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e" in namespace "downward-api-9642" to be "Succeeded or Failed"
Mar 11 20:07:33.959: INFO: Pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506575ms
Mar 11 20:07:35.966: INFO: Pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009719024s
Mar 11 20:07:37.971: INFO: Pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014861759s
Mar 11 20:07:39.976: INFO: Pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019347477s
STEP: Saw pod success
Mar 11 20:07:39.976: INFO: Pod "downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e" satisfied condition "Succeeded or Failed"
Mar 11 20:07:39.979: INFO: Trying to get logs from node node2 pod downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e container dapi-container: 
STEP: delete the pod
Mar 11 20:07:39.994: INFO: Waiting for pod downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e to disappear
Mar 11 20:07:39.996: INFO: Pod downward-api-43173211-8b79-4a71-a7dd-f17c21843d1e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:39.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9642" for this suite.

• [SLOW TEST:6.185 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4159,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:40.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8677
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0311 20:07:46.157311      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:07:46.157: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:46.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8677" for this suite.

• [SLOW TEST:6.161 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":244,"skipped":4173,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:46.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-7490
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar 11 20:07:46.796: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar 11 20:07:48.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 11 20:07:50.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090066, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 20:07:53.817: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:07:53.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:07:59.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7490" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:13.768 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":245,"skipped":4228,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:07:59.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-5855
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 11 20:08:00.073: INFO: Waiting up to 5m0s for pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7" in namespace "downward-api-5855" to be "Succeeded or Failed"
Mar 11 20:08:00.075: INFO: Pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.920533ms
Mar 11 20:08:02.078: INFO: Pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005006817s
Mar 11 20:08:04.083: INFO: Pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009395943s
Mar 11 20:08:06.086: INFO: Pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012429374s
STEP: Saw pod success
Mar 11 20:08:06.086: INFO: Pod "downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7" satisfied condition "Succeeded or Failed"
Mar 11 20:08:06.088: INFO: Trying to get logs from node node1 pod downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7 container dapi-container: 
STEP: delete the pod
Mar 11 20:08:06.103: INFO: Waiting for pod downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7 to disappear
Mar 11 20:08:06.105: INFO: Pod downward-api-c08e5e6c-9b8e-4b8f-9b49-6565caf0fbf7 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:08:06.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5855" for this suite.

• [SLOW TEST:6.178 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4248,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:08:06.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1476
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0311 20:08:16.267143      12 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 20:08:16.267: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:08:16.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1476" for this suite.

• [SLOW TEST:10.161 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":247,"skipped":4251,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
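The garbage-collector spec above creates an RC, deletes it without orphaning, and waits for the dependent pods to disappear. A toy in-memory sketch of that ownerReference-driven cascade (illustrative only, not the real controller logic):

```python
# Sketch of the behavior this test verifies: deleting an owner (the RC)
# WITHOUT orphaning also removes every dependent object (the pods) whose
# ownerReferences point at it. All names here are illustrative.

def cascade_delete(objects, owner_uid):
    """Delete the owner and, non-orphaning, every dependent referencing it."""
    survivors = {}
    for uid, obj in objects.items():
        if uid == owner_uid:
            continue  # the owner itself is deleted
        owner_uids = {ref["uid"] for ref in obj.get("ownerReferences", [])}
        if owner_uid in owner_uids:
            continue  # dependent is garbage-collected
        survivors[uid] = obj
    return survivors

cluster = {
    "rc-1": {"kind": "ReplicationController"},
    "pod-a": {"kind": "Pod", "ownerReferences": [{"uid": "rc-1"}]},
    "pod-b": {"kind": "Pod", "ownerReferences": [{"uid": "rc-1"}]},
    "pod-c": {"kind": "Pod"},  # unrelated pod, must survive
}

remaining = cascade_delete(cluster, "rc-1")
print(sorted(remaining))  # -> ['pod-c']
```

In the real cluster the kube-controller-manager's garbage collector performs this walk; the e2e spec only observes that the pods vanish after the RC is deleted.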
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:08:16.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2158
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-49c3312e-c255-4d0a-9766-0f93fae1c95a
STEP: Creating configMap with name cm-test-opt-upd-6356b8c3-6669-42f5-a18a-95555ee8a174
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-49c3312e-c255-4d0a-9766-0f93fae1c95a
STEP: Updating configmap cm-test-opt-upd-6356b8c3-6669-42f5-a18a-95555ee8a174
STEP: Creating configMap with name cm-test-opt-create-23599575-df5f-410f-ad1c-1338394973b4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:08:24.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2158" for this suite.

• [SLOW TEST:8.214 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4263,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:08:24.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5555
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 20:08:24.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 20:08:26.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090104, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090104, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090104, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090104, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 20:08:29.918: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:08:29.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5555" for this suite.
STEP: Destroying namespace "webhook-5555-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.483 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":249,"skipped":4269,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
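The mutating-webhook spec above registers a webhook and then creates a ConfigMap that the webhook must change. Conceptually, the webhook answers the apiserver's AdmissionReview with a base64-encoded JSONPatch; a stdlib sketch of such a response (patch contents are illustrative, not the e2e webhook's exact patch):

```python
import base64
import json

# Sketch of a mutating admission response for a ConfigMap: allow the request
# and attach a base64-encoded JSONPatch that adds a data key. The AdmissionReview
# request/response shape follows admission.k8s.io/v1.

def mutate_configmap(admission_review):
    uid = admission_review["request"]["uid"]
    patch = [{"op": "add", "path": "/data/mutation", "value": "webhook"}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,                      # must echo the request UID
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

review = {"request": {"uid": "abc-123",
                      "object": {"kind": "ConfigMap", "data": {}}}}
resp = mutate_configmap(review)
decoded = json.loads(base64.b64decode(resp["response"]["patch"]))
print(decoded[0]["path"])  # -> /data/mutation
```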
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:08:29.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-3548
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar 11 20:08:30.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49039 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:08:30.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49039 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar 11 20:08:40.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49102 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:08:40.109: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49102 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar 11 20:08:50.117: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49140 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:08:50.117: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49140 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar 11 20:09:00.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49180 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:09:00.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-a 026ddf53-f2d5-458d-b617-66a83e0dd1b9 49180 0 2021-03-11 20:08:30 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2021-03-11 20:08:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar 11 20:09:10.128: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-b 3d766976-57f3-4614-afe2-dec97fdc2d66 49214 0 2021-03-11 20:09:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-03-11 20:09:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:09:10.129: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-b 3d766976-57f3-4614-afe2-dec97fdc2d66 49214 0 2021-03-11 20:09:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-03-11 20:09:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar 11 20:09:20.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-b 3d766976-57f3-4614-afe2-dec97fdc2d66 49244 0 2021-03-11 20:09:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-03-11 20:09:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 11 20:09:20.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-configmap-b 3d766976-57f3-4614-afe2-dec97fdc2d66 49244 0 2021-03-11 20:09:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2021-03-11 20:09:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:09:30.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3548" for this suite.

• [SLOW TEST:60.177 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":250,"skipped":4278,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
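The `FieldsV1{Raw:*[123 34 ...]}` runs in the watch events above are Go's byte-slice rendering of the managedFields JSON. Decoding the byte values as UTF-8 recovers the field-ownership map; the bytes below are copied from the first ADDED event for `e2e-watch-test-configmap-a`:

```python
import json

# Byte values copied verbatim from the log's FieldsV1 Raw slice; decoding
# them as UTF-8 yields the managedFields ownership JSON for the configmap.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34,
       58, 123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104,
       105, 115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58,
       123, 125, 125, 125, 125]

text = bytes(raw).decode("utf-8")
fields = json.loads(text)  # e2e.test owns the watch-this-configmap label
print(text)
```

This also explains why every event appears twice in the log: the label-A watcher and the label-A-or-B watcher each receive their own copy of the same notification.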
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:09:30.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5383
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Mar 11 20:09:34.295: INFO: Pod pod-hostip-a7c0f8ff-9c51-4064-af81-af8963c0fa5f has hostIP: 10.10.190.208
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:09:34.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5383" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4286,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:09:34.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-3336
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-3336
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 11 20:09:34.429: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 11 20:09:34.463: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 20:09:36.467: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 20:09:38.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 20:09:40.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 20:09:42.466: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 20:09:44.466: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 20:09:46.468: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 11 20:09:48.466: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 11 20:09:48.471: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 11 20:09:52.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.215:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3336 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:09:52.508: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:09:52.621: INFO: Found all expected endpoints: [netserver-0]
Mar 11 20:09:52.625: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.221:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3336 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:09:52.625: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:09:52.729: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:09:52.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3336" for this suite.

• [SLOW TEST:18.434 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4314,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
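The network spec above checks node-to-pod connectivity by curling each netserver pod's `/hostName` endpoint and matching the reply against the expected pod name. A local stdlib stand-in for that probe (hostname and addresses are illustrative; the real agnhost server returns its actual pod hostname):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

HOSTNAME = "netserver-0"  # illustrative; the real server reports its pod name

class HostNameHandler(BaseHTTPRequestHandler):
    """Answers GET /hostName with the server's hostname, like agnhost."""

    def do_GET(self):
        if self.path == "/hostName":
            body = HOSTNAME.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), HostNameHandler)  # port 0: auto-assign
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of the test's: curl -s --max-time 15 http://<pod>:8080/hostName
with urllib.request.urlopen(f"http://127.0.0.1:{port}/hostName", timeout=5) as r:
    reply = r.read().decode().strip()
server.shutdown()
print(reply)  # -> netserver-0
```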
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:09:52.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5669
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-7283ff87-3a0e-4aa9-a914-6a9f2a133133
STEP: Creating secret with name s-test-opt-upd-744c9070-0d92-4f3c-b645-1b56d27c1dbb
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7283ff87-3a0e-4aa9-a914-6a9f2a133133
STEP: Updating secret s-test-opt-upd-744c9070-0d92-4f3c-b645-1b56d27c1dbb
STEP: Creating secret with name s-test-opt-create-8b49f5b5-519d-4704-a571-fbc8b74b780d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:00.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5669" for this suite.

• [SLOW TEST:8.229 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4323,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:00.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-836
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-836
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-836
STEP: Deleting pre-stop pod
Mar 11 20:10:16.142: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:16.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-836" for this suite.

• [SLOW TEST:15.186 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":254,"skipped":4333,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
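The mechanism this spec exercises is the container `preStop` lifecycle hook: the kubelet runs the hook before sending SIGTERM, and the test's server pod records the resulting callback (the `"prestop": 1` entry in the JSON above). A minimal sketch of that kind of pod follows; the names, image, and endpoint are illustrative, not the test's actual fixtures:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo          # illustrative name
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is sent; here it notifies a peer
          # (hypothetical "server" service) that shutdown began.
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop || true"]
```

Deleting such a pod triggers the hook, which is what the "Deleting pre-stop pod" step above verifies on the receiving side.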
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:16.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6757
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 20:10:16.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974" in namespace "projected-6757" to be "Succeeded or Failed"
Mar 11 20:10:16.290: INFO: Pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190077ms
Mar 11 20:10:18.294: INFO: Pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006326441s
Mar 11 20:10:20.298: INFO: Pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011054495s
Mar 11 20:10:22.302: INFO: Pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014333186s
STEP: Saw pod success
Mar 11 20:10:22.302: INFO: Pod "downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974" satisfied condition "Succeeded or Failed"
Mar 11 20:10:22.305: INFO: Trying to get logs from node node1 pod downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974 container client-container: 
STEP: delete the pod
Mar 11 20:10:22.343: INFO: Waiting for pod downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974 to disappear
Mar 11 20:10:22.347: INFO: Pod downwardapi-volume-ef4af3c5-271e-4e5b-9865-b69819e75974 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:22.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6757" for this suite.

• [SLOW TEST:6.199 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4359,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
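The behavior under test here is that a downward API `resourceFieldRef` for `limits.memory` falls back to the node's allocatable memory when the container sets no memory limit. A minimal sketch of such a pod, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo      # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # No resources.limits.memory set, so the projected value
    # defaults to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```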
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:22.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-7149
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nspatchtest-087d6f41-581d-4ba3-ac10-688598083a91-6194
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:22.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7149" for this suite.
STEP: Destroying namespace "nspatchtest-087d6f41-581d-4ba3-ac10-688598083a91-6194" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":256,"skipped":4359,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:22.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-6617
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 11 20:10:22.743: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 20:10:22.755: INFO: Waiting for terminating namespaces to be deleted...
Mar 11 20:10:22.758: INFO: 
Logging pods the kubelet thinks are on node node1 before test
Mar 11 20:10:22.770: INFO: nginx-proxy-node1 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 20:10:22.770: INFO: cmk-s6v97 from kube-system started at 2021-03-11 18:03:34 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 20:10:22.770: INFO: kube-multus-ds-amd64-gtmmz from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:10:22.770: INFO: node-feature-discovery-worker-nf56t from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 20:10:22.770: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vf8xv from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 20:10:22.770: INFO: prometheus-k8s-0 from monitoring started at 2021-03-11 18:04:37 +0000 UTC (5 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container grafana ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container prometheus ready: true, restart count 1
Mar 11 20:10:22.770: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
Mar 11 20:10:22.770: INFO: kube-proxy-5zz5g from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container kube-proxy ready: true, restart count 2
Mar 11 20:10:22.770: INFO: kube-flannel-8pz9c from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 20:10:22.770: INFO: cmk-init-discover-node2-29mrv from kube-system started at 2021-03-11 18:03:13 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:10:22.770: INFO: 	Container init ready: false, restart count 0
Mar 11 20:10:22.770: INFO: 	Container install ready: false, restart count 0
Mar 11 20:10:22.770: INFO: cmk-webhook-888945845-2gpfq from kube-system started at 2021-03-11 18:03:34 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container cmk-webhook ready: true, restart count 0
Mar 11 20:10:22.770: INFO: node-exporter-mw629 from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:10:22.770: INFO: collectd-4rvsd from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.770: INFO: 	Container collectd ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 20:10:22.770: INFO: 
Logging pods the kubelet thinks are on node node2 before test
Mar 11 20:10:22.793: INFO: kube-flannel-8wwvj from kube-system started at 2021-03-11 17:52:37 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-flannel ready: true, restart count 2
Mar 11 20:10:22.793: INFO: node-exporter-x6vqx from monitoring started at 2021-03-11 18:04:28 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container node-exporter ready: true, restart count 0
Mar 11 20:10:22.793: INFO: collectd-86ww6 from monitoring started at 2021-03-11 18:07:58 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container collectd ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container collectd-exporter ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container rbac-proxy ready: true, restart count 0
Mar 11 20:10:22.793: INFO: node-feature-discovery-worker-8xdg7 from kube-system started at 2021-03-11 17:58:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container nfd-worker ready: true, restart count 0
Mar 11 20:10:22.793: INFO: kube-multus-ds-amd64-rpm89 from kube-system started at 2021-03-11 17:52:47 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-multus ready: true, restart count 1
Mar 11 20:10:22.793: INFO: prometheus-operator-f66f5fb4d-f2pkm from monitoring started at 2021-03-11 18:04:21 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container prometheus-operator ready: true, restart count 0
Mar 11 20:10:22.793: INFO: tester from prestop-836 started at 2021-03-11 20:10:05 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container tester ready: true, restart count 0
Mar 11 20:10:22.793: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-wqfmz from monitoring started at 2021-03-11 18:07:22 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container tas-controller ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container tas-extender ready: true, restart count 0
Mar 11 20:10:22.793: INFO: kube-proxy-znx8n from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-proxy ready: true, restart count 1
Mar 11 20:10:22.793: INFO: cmk-init-discover-node2-c5j6h from kube-system started at 2021-03-11 18:02:02 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container init ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container install ready: false, restart count 0
Mar 11 20:10:22.793: INFO: cmk-init-discover-node2-qbc6m from kube-system started at 2021-03-11 18:02:53 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container init ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container install ready: false, restart count 0
Mar 11 20:10:22.793: INFO: cmk-slzjv from kube-system started at 2021-03-11 18:03:33 +0000 UTC (2 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container nodereport ready: true, restart count 0
Mar 11 20:10:22.793: INFO: 	Container reconcile ready: true, restart count 0
Mar 11 20:10:22.793: INFO: kubernetes-dashboard-57777fbdcb-zsnff from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Mar 11 20:10:22.793: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-ptgh4 from kube-system started at 2021-03-11 18:00:01 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kube-sriovdp ready: true, restart count 0
Mar 11 20:10:22.793: INFO: cmk-init-discover-node2-9knwq from kube-system started at 2021-03-11 18:02:23 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container init ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container install ready: false, restart count 0
Mar 11 20:10:22.793: INFO: cmk-init-discover-node1-vk7wm from kube-system started at 2021-03-11 18:01:40 +0000 UTC (3 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container discover ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container init ready: false, restart count 0
Mar 11 20:10:22.793: INFO: 	Container install ready: false, restart count 0
Mar 11 20:10:22.793: INFO: nginx-proxy-node2 from kube-system started at 2021-03-11 17:51:59 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container nginx-proxy ready: true, restart count 2
Mar 11 20:10:22.793: INFO: kubernetes-metrics-scraper-54fbb4d595-dq4gp from kube-system started at 2021-03-11 17:53:12 +0000 UTC (1 container statuses recorded)
Mar 11 20:10:22.793: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2180f3cb-ff43-458d-8c81-73571a35bca5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-2180f3cb-ff43-458d-8c81-73571a35bca5 off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2180f3cb-ff43-458d-8c81-73571a35bca5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:30.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6617" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:8.252 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":257,"skipped":4408,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
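The steps above (label a node, then relaunch the pod with a matching `nodeSelector`) can be sketched as follows; the label key/value mirror the random `kubernetes.io/e2e-… 42` label in the log but are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo     # illustrative name
spec:
  # Schedulable only on a node carrying this exact label,
  # e.g. after: kubectl label node node1 kubernetes.io/e2e-example=42
  nodeSelector:
    kubernetes.io/e2e-example: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```

The test then removes the label again, which is why the final steps verify the node no longer carries it.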
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:30.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2491
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 20:10:31.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3" in namespace "projected-2491" to be "Succeeded or Failed"
Mar 11 20:10:31.010: INFO: Pod "downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447901ms
Mar 11 20:10:33.014: INFO: Pod "downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005742648s
Mar 11 20:10:35.018: INFO: Pod "downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010497001s
STEP: Saw pod success
Mar 11 20:10:35.018: INFO: Pod "downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3" satisfied condition "Succeeded or Failed"
Mar 11 20:10:35.021: INFO: Trying to get logs from node node1 pod downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3 container client-container: 
STEP: delete the pod
Mar 11 20:10:35.035: INFO: Waiting for pod downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3 to disappear
Mar 11 20:10:35.037: INFO: Pod downwardapi-volume-d0975103-9230-4fb8-928a-644af6af31c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:35.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2491" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4445,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
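This spec checks that `defaultMode` on a projected volume controls the permission bits of the files the kubelet writes. A sketch with an illustrative mode value:

```yaml
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400       # illustrative; applied to all projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```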
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:35.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-307
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 11 20:10:35.177: INFO: Waiting up to 5m0s for pod "pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee" in namespace "emptydir-307" to be "Succeeded or Failed"
Mar 11 20:10:35.180: INFO: Pod "pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222255ms
Mar 11 20:10:37.183: INFO: Pod "pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006006259s
Mar 11 20:10:39.187: INFO: Pod "pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010163087s
STEP: Saw pod success
Mar 11 20:10:39.188: INFO: Pod "pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee" satisfied condition "Succeeded or Failed"
Mar 11 20:10:39.190: INFO: Trying to get logs from node node1 pod pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee container test-container: 
STEP: delete the pod
Mar 11 20:10:39.202: INFO: Waiting for pod pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee to disappear
Mar 11 20:10:39.204: INFO: Pod pod-e63877a8-710b-4c9d-a9ca-6d71be9aeeee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:39.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-307" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4457,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
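"(root,0666,tmpfs)" means: running as root, expecting 0666 file permissions, on a memory-backed emptyDir. The tmpfs backing is selected with `medium: Memory`; a sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo         # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    # Creates a file and verifies its mode, roughly what the
    # test image does internally.
    command: ["sh", "-c", "touch /test-volume/data && chmod 0666 /test-volume/data && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # backs the volume with tmpfs
```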
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:39.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2375
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:39.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2375" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":260,"skipped":4458,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
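The create/update/delete cycle above operates on an object of this shape; the name and limits are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # illustrative name
spec:
  hard:
    pods: "2"                 # the "Updating" step changes values like this
```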
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:39.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-7440
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 11 20:10:47.533: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:47.533: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:47.647: INFO: Exec stderr: ""
Mar 11 20:10:47.647: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:47.647: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:47.747: INFO: Exec stderr: ""
Mar 11 20:10:47.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:47.747: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:47.857: INFO: Exec stderr: ""
Mar 11 20:10:47.857: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:47.857: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:47.957: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 11 20:10:47.957: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:47.957: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.054: INFO: Exec stderr: ""
Mar 11 20:10:48.054: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:48.054: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.150: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 11 20:10:48.150: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:48.150: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.265: INFO: Exec stderr: ""
Mar 11 20:10:48.265: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:48.265: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.377: INFO: Exec stderr: ""
Mar 11 20:10:48.377: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:48.377: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.485: INFO: Exec stderr: ""
Mar 11 20:10:48.485: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7440 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 11 20:10:48.485: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:10:48.596: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:48.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7440" for this suite.

• [SLOW TEST:9.240 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4483,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:48.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-197
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7a740e4a-cf4c-4e8d-96a8-8c0964e9a286
STEP: Creating a pod to test consume secrets
Mar 11 20:10:48.745: INFO: Waiting up to 5m0s for pod "pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9" in namespace "secrets-197" to be "Succeeded or Failed"
Mar 11 20:10:48.748: INFO: Pod "pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595517ms
Mar 11 20:10:50.753: INFO: Pod "pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008052855s
Mar 11 20:10:52.759: INFO: Pod "pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013496439s
STEP: Saw pod success
Mar 11 20:10:52.759: INFO: Pod "pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9" satisfied condition "Succeeded or Failed"
Mar 11 20:10:52.762: INFO: Trying to get logs from node node2 pod pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9 container secret-volume-test: 
STEP: delete the pod
Mar 11 20:10:52.774: INFO: Waiting for pod pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9 to disappear
Mar 11 20:10:52.776: INFO: Pod pod-secrets-5962b96f-df29-4f90-bc8a-c0abfbad0ef9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:10:52.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-197" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4498,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:10:52.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8201
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-02acc44f-3fc5-4d73-b8ea-f4764404735d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-02acc44f-3fc5-4d73-b8ea-f4764404735d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:12:03.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8201" for this suite.

• [SLOW TEST:70.244 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4521,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:12:03.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6900
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 11 20:12:03.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630" in namespace "projected-6900" to be "Succeeded or Failed"
Mar 11 20:12:03.165: INFO: Pod "downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500574ms
Mar 11 20:12:05.168: INFO: Pod "downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005634699s
Mar 11 20:12:07.171: INFO: Pod "downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008968573s
STEP: Saw pod success
Mar 11 20:12:07.171: INFO: Pod "downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630" satisfied condition "Succeeded or Failed"
Mar 11 20:12:07.173: INFO: Trying to get logs from node node1 pod downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630 container client-container: 
STEP: delete the pod
Mar 11 20:12:07.187: INFO: Waiting for pod downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630 to disappear
Mar 11 20:12:07.190: INFO: Pod downwardapi-volume-c3df1f06-0da2-4054-ad4d-e7137c3df630 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:12:07.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6900" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4526,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:12:07.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4824
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:12:40.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4824" for this suite.

• [SLOW TEST:33.359 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4553,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:12:40.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4292
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-jx4k
STEP: Creating a pod to test atomic-volume-subpath
Mar 11 20:12:40.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jx4k" in namespace "subpath-4292" to be "Succeeded or Failed"
Mar 11 20:12:40.699: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056488ms
Mar 11 20:12:42.703: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006239922s
Mar 11 20:12:44.707: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 4.009798498s
Mar 11 20:12:46.710: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 6.013493662s
Mar 11 20:12:48.716: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 8.019037875s
Mar 11 20:12:50.720: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 10.022911276s
Mar 11 20:12:52.726: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 12.028707618s
Mar 11 20:12:54.729: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 14.031857996s
Mar 11 20:12:56.733: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 16.035678453s
Mar 11 20:12:58.739: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 18.041816466s
Mar 11 20:13:00.744: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 20.047497135s
Mar 11 20:13:02.750: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Running", Reason="", readiness=true. Elapsed: 22.052980977s
Mar 11 20:13:04.754: INFO: Pod "pod-subpath-test-configmap-jx4k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056678385s
STEP: Saw pod success
Mar 11 20:13:04.754: INFO: Pod "pod-subpath-test-configmap-jx4k" satisfied condition "Succeeded or Failed"
Mar 11 20:13:04.756: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-jx4k container test-container-subpath-configmap-jx4k: 
STEP: delete the pod
Mar 11 20:13:04.778: INFO: Waiting for pod pod-subpath-test-configmap-jx4k to disappear
Mar 11 20:13:04.780: INFO: Pod pod-subpath-test-configmap-jx4k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jx4k
Mar 11 20:13:04.780: INFO: Deleting pod "pod-subpath-test-configmap-jx4k" in namespace "subpath-4292"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:13:04.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4292" for this suite.

• [SLOW TEST:24.232 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":266,"skipped":4567,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:13:04.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-9945
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2149c6b5-0933-4fc1-8e04-31e107605034
STEP: Creating a pod to test consume configMaps
Mar 11 20:13:04.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587" in namespace "configmap-9945" to be "Succeeded or Failed"
Mar 11 20:13:04.933: INFO: Pod "pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473978ms
Mar 11 20:13:06.939: INFO: Pod "pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008911547s
Mar 11 20:13:08.945: INFO: Pod "pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014598621s
STEP: Saw pod success
Mar 11 20:13:08.945: INFO: Pod "pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587" satisfied condition "Succeeded or Failed"
Mar 11 20:13:08.948: INFO: Trying to get logs from node node1 pod pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587 container configmap-volume-test: 
STEP: delete the pod
Mar 11 20:13:08.961: INFO: Waiting for pod pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587 to disappear
Mar 11 20:13:08.963: INFO: Pod pod-configmaps-89cd89d6-8459-4911-8fb3-7afc5ee65587 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:13:08.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9945" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4604,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:13:08.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5243
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 11 20:13:09.433: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 11 20:13:11.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090389, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090389, loc:(*time.Location)(0x7b4c620)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090389, loc:(*time.Location)(0x7b4c620)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751090389, loc:(*time.Location)(0x7b4c620)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 11 20:13:14.454: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:13:14.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8147-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:13:20.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5243" for this suite.
STEP: Destroying namespace "webhook-5243-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.600 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":268,"skipped":4631,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:13:20.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1425
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-0fa52ac4-6922-4c87-b826-56d9b6d0ff12
STEP: Creating a pod to test consume configMaps
Mar 11 20:13:20.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513" in namespace "configmap-1425" to be "Succeeded or Failed"
Mar 11 20:13:20.717: INFO: Pod "pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814111ms
Mar 11 20:13:22.722: INFO: Pod "pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009351137s
Mar 11 20:13:24.725: INFO: Pod "pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012432425s
STEP: Saw pod success
Mar 11 20:13:24.725: INFO: Pod "pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513" satisfied condition "Succeeded or Failed"
Mar 11 20:13:24.729: INFO: Trying to get logs from node node1 pod pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513 container configmap-volume-test: 
STEP: delete the pod
Mar 11 20:13:24.743: INFO: Waiting for pod pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513 to disappear
Mar 11 20:13:24.746: INFO: Pod pod-configmaps-50b053cf-55d1-4b8c-86f3-6301900d4513 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:13:24.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1425" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4635,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:13:24.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8623
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 11 20:13:24.895: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 20:13:26.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Pending, waiting for it to be Running (with Ready = true)
Mar 11 20:13:28.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:30.897: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:32.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:34.897: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:36.899: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:38.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:40.899: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:42.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:44.898: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:46.904: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:48.903: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = false)
Mar 11 20:13:50.900: INFO: The status of Pod test-webserver-380e4b88-bdbb-47cb-a475-4541d42a99d6 is Running (Ready = true)
Mar 11 20:13:50.903: INFO: Container started at 2021-03-11 20:13:27 +0000 UTC, pod became ready at 2021-03-11 20:13:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:13:50.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8623" for this suite.

• [SLOW TEST:26.155 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4680,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:13:50.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-4685
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4685
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4685
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4685
Mar 11 20:13:51.047: INFO: Found 0 stateful pods, waiting for 1
Mar 11 20:14:01.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar 11 20:14:01.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 20:14:01.408: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 20:14:01.408: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 20:14:01.408: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 20:14:01.411: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 11 20:14:11.415: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 20:14:11.415: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 20:14:11.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999531s
Mar 11 20:14:12.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995590135s
Mar 11 20:14:13.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991689503s
Mar 11 20:14:14.438: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988453248s
Mar 11 20:14:15.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985151436s
Mar 11 20:14:16.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979318393s
Mar 11 20:14:17.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.975486107s
Mar 11 20:14:18.455: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971490489s
Mar 11 20:14:19.460: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.967410552s
Mar 11 20:14:20.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 961.152828ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4685
Mar 11 20:14:21.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 20:14:21.756: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 20:14:21.756: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 20:14:21.756: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 20:14:21.759: INFO: Found 1 stateful pods, waiting for 3
Mar 11 20:14:31.762: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 20:14:31.762: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 20:14:31.762: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 11 20:14:41.764: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 20:14:41.764: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 11 20:14:41.764: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar 11 20:14:41.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 20:14:42.033: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 20:14:42.033: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 20:14:42.033: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 20:14:42.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 20:14:42.289: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 20:14:42.289: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 20:14:42.289: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 20:14:42.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 11 20:14:42.532: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar 11 20:14:42.532: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 11 20:14:42.532: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar 11 20:14:42.532: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 20:14:42.535: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Mar 11 20:14:52.544: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 20:14:52.544: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 20:14:52.544: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 11 20:14:52.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999477s
Mar 11 20:14:53.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995696933s
Mar 11 20:14:54.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991574766s
Mar 11 20:14:55.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986109299s
Mar 11 20:14:56.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980690092s
Mar 11 20:14:57.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975887222s
Mar 11 20:14:58.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970439652s
Mar 11 20:14:59.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964558638s
Mar 11 20:15:00.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961237039s
Mar 11 20:15:01.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.774082ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-4685
Mar 11 20:15:02.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 20:15:02.869: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 20:15:02.869: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 20:15:02.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 20:15:02.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 20:15:03.131: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 20:15:03.131: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 20:15:03.131: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 20:15:03.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4685 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 11 20:15:03.394: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 11 20:15:03.394: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 11 20:15:03.394: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 11 20:15:03.394: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 11 20:15:13.408: INFO: Deleting all statefulset in ns statefulset-4685
Mar 11 20:15:13.411: INFO: Scaling statefulset ss to 0
Mar 11 20:15:13.420: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 20:15:13.422: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:15:13.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4685" for this suite.

• [SLOW TEST:82.529 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":271,"skipped":4681,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 11 20:15:13.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3387
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 11 20:15:13.569: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 20:15:21.467: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 11 20:15:38.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3387" for this suite.

• [SLOW TEST:24.728 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":272,"skipped":4710,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
SSSSSSSSS
Mar 11 20:15:38.171: INFO: Running AfterSuite actions on all nodes
Mar 11 20:15:38.171: INFO: Running AfterSuite actions on node 1
Mar 11 20:15:38.171: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":272,"skipped":4719,"failed":3,"failures":["[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}


Summarizing 3 Failures:

[Fail] [sig-auth] ServiceAccounts [It] should mount an API token into pods  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:228

[Fail] [k8s.io] Pods [It] should contain environment variables for services [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:543

[Fail] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [It] works for CRD preserving unknown fields in an embedded object [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:235

Ran 275 of 4994 Specs in 5915.052 seconds
FAIL! -- 272 Passed | 3 Failed | 0 Pending | 4719 Skipped
--- FAIL: TestE2E (5915.14s)
FAIL