I0203 20:50:16.739351 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0203 20:50:16.739585 6 e2e.go:109] Starting e2e run "642bc712-2652-481b-b3e7-e8c70f7d3410" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1612385415 - Will randomize all specs
Will run 278 of 4846 specs

Feb 3 20:50:16.793: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 20:50:16.798: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 3 20:50:16.835: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 3 20:50:16.863: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 3 20:50:16.863: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 3 20:50:16.863: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 3 20:50:16.869: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 3 20:50:16.869: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 3 20:50:16.869: INFO: e2e test version: v1.17.16
Feb 3 20:50:16.870: INFO: kube-apiserver version: v1.17.11
Feb 3 20:50:16.870: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 20:50:16.876: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:50:16.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Feb 3 20:50:16.957: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 3 20:50:16.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03" in namespace "downward-api-87" to be "success or failure"
Feb 3 20:50:17.008: INFO: Pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03": Phase="Pending", Reason="", readiness=false. Elapsed: 20.29016ms
Feb 3 20:50:19.306: INFO: Pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318060537s
Feb 3 20:50:21.310: INFO: Pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03": Phase="Running", Reason="", readiness=true. Elapsed: 4.322289254s
Feb 3 20:50:23.314: INFO: Pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.326086623s
STEP: Saw pod success
Feb 3 20:50:23.314: INFO: Pod "downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03" satisfied condition "success or failure"
Feb 3 20:50:23.317: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03 container client-container:
STEP: delete the pod
Feb 3 20:50:23.334: INFO: Waiting for pod downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03 to disappear
Feb 3 20:50:23.352: INFO: Pod downwardapi-volume-1bdc455e-02b9-4436-b0c5-b9db1fac4f03 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:50:23.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-87" for this suite.

• [SLOW TEST:6.509 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":57,"failed":0}
SSSS
------------------------------
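As a reference for what this spec exercises: a minimal sketch, in the k8s.io/api Go types the e2e framework itself is built on, of a pod whose container sets no CPU limit while a downwardAPI volume still requests limits.cpu, so the kubelet substitutes node allocatable. Names, image, and paths here are illustrative placeholders, not the test's actual manifest.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources.limits.cpu on the container; the downwardAPI file for
	// limits.cpu is therefore defaulted to the node's allocatable CPU.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------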
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:50:23.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 3 20:50:24.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 3 20:50:26.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982224, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982224, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982224, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982223, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 3 20:50:29.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:50:29.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1858" for this suite.
STEP: Destroying namespace "webhook-1858-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.824 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":2,"skipped":61,"failed":0}
SSSSSSSSSSS
------------------------------
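For context on the "Registering the mutating configmap webhook" step: a minimal sketch of a MutatingWebhookConfiguration in the admissionregistration/v1 Go types. The webhook name, service path, label values, and CA bundle are placeholders, not what the suite registers.

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-configmaps" // placeholder path served by the webhook pod
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmap-example"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-configmap.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM CA bundle>"), // placeholder
			},
			// Intercept ConfigMap creations only.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
------------------------------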
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:50:29.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-8a3c5515-af66-4db9-bb2f-a1a078884440
STEP: Creating a pod to test consume configMaps
Feb 3 20:50:29.276: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c" in namespace "projected-3920" to be "success or failure"
Feb 3 20:50:29.280: INFO: Pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834807ms
Feb 3 20:50:31.284: INFO: Pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007658338s
Feb 3 20:50:33.486: INFO: Pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209665411s
Feb 3 20:50:35.494: INFO: Pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.217819451s
STEP: Saw pod success
Feb 3 20:50:35.494: INFO: Pod "pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c" satisfied condition "success or failure"
Feb 3 20:50:35.496: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c container projected-configmap-volume-test:
STEP: delete the pod
Feb 3 20:50:35.598: INFO: Waiting for pod pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c to disappear
Feb 3 20:50:35.614: INFO: Pod pod-projected-configmaps-202b2113-92d1-4025-bd57-42ea27200b8c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:50:35.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3920" for this suite.

• [SLOW TEST:6.411 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
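A minimal sketch of the shape this spec exercises: a projected configMap volume with a key-to-path mapping, read by a container running as a non-root UID. The ConfigMap name, key, UID, and paths are illustrative, not the test's generated values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // run the pod as a non-root user
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/renamed-key"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								// Map the ConfigMap key "data-1" to a different file name.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "renamed-key"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------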
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:50:35.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 3 20:51:01.776: INFO: Container started at 2021-02-03 20:50:38 +0000 UTC, pod became ready at 2021-02-03 20:51:00 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:51:01.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1513" for this suite.

• [SLOW TEST:26.163 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":98,"failed":0}
SSSSSSSSSS
------------------------------
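A sketch of a readiness probe with an initial delay, which is what produces the ~22 s gap between container start and Ready above. The probe command, image, and delay values are illustrative; note also that k8s.io/api releases contemporary with this run (v0.17.x) name the embedded probe field Handler, while newer releases renamed it to ProbeHandler.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-probe-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "busybox",
				Args:  []string{"sleep", "600"},
				ReadinessProbe: &corev1.Probe{
					// Field is named Handler in k8s.io/api v0.17.x,
					// ProbeHandler in current releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"test", "-e", "/tmp/ready"}},
					},
					InitialDelaySeconds: 20, // pod must not report Ready before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------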
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:51:01.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:51:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2948" for this suite.

• [SLOW TEST:7.285 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":5,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
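A sketch of the given/when pair behind this spec: a bare pod carrying a 'name' label, then a ReplicationController whose selector matches it. The RC controller counts the existing pod toward its replicas and takes ownership (via ownerReferences) instead of creating a new pod. Names, image, and replica count are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"}
	replicas := int32(1)
	// The orphan pod, created first.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "busybox"}},
		},
	}
	// The RC whose selector matches the orphan; it adopts rather than scales up.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}
	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------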
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:51:09.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 3 20:51:21.197: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.198: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.240939 6 log.go:172] (0xc002dda4d0) (0xc00096ef00) Create stream
I0203 20:51:21.240987 6 log.go:172] (0xc002dda4d0) (0xc00096ef00) Stream added, broadcasting: 1
I0203 20:51:21.245452 6 log.go:172] (0xc002dda4d0) Reply frame received for 1
I0203 20:51:21.245511 6 log.go:172] (0xc002dda4d0) (0xc000a62280) Create stream
I0203 20:51:21.245529 6 log.go:172] (0xc002dda4d0) (0xc000a62280) Stream added, broadcasting: 3
I0203 20:51:21.246525 6 log.go:172] (0xc002dda4d0) Reply frame received for 3
I0203 20:51:21.246558 6 log.go:172] (0xc002dda4d0) (0xc00096f040) Create stream
I0203 20:51:21.246572 6 log.go:172] (0xc002dda4d0) (0xc00096f040) Stream added, broadcasting: 5
I0203 20:51:21.247319 6 log.go:172] (0xc002dda4d0) Reply frame received for 5
I0203 20:51:21.344964 6 log.go:172] (0xc002dda4d0) Data frame received for 5
I0203 20:51:21.345031 6 log.go:172] (0xc00096f040) (5) Data frame handling
I0203 20:51:21.345058 6 log.go:172] (0xc002dda4d0) Data frame received for 3
I0203 20:51:21.345070 6 log.go:172] (0xc000a62280) (3) Data frame handling
I0203 20:51:21.345085 6 log.go:172] (0xc000a62280) (3) Data frame sent
I0203 20:51:21.345095 6 log.go:172] (0xc002dda4d0) Data frame received for 3
I0203 20:51:21.345103 6 log.go:172] (0xc000a62280) (3) Data frame handling
I0203 20:51:21.346758 6 log.go:172] (0xc002dda4d0) Data frame received for 1
I0203 20:51:21.346785 6 log.go:172] (0xc00096ef00) (1) Data frame handling
I0203 20:51:21.346801 6 log.go:172] (0xc00096ef00) (1) Data frame sent
I0203 20:51:21.346819 6 log.go:172] (0xc002dda4d0) (0xc00096ef00) Stream removed, broadcasting: 1
I0203 20:51:21.346841 6 log.go:172] (0xc002dda4d0) Go away received
I0203 20:51:21.347227 6 log.go:172] (0xc002dda4d0) (0xc00096ef00) Stream removed, broadcasting: 1
I0203 20:51:21.347265 6 log.go:172] (0xc002dda4d0) (0xc000a62280) Stream removed, broadcasting: 3
I0203 20:51:21.347275 6 log.go:172] (0xc002dda4d0) (0xc00096f040) Stream removed, broadcasting: 5
Feb 3 20:51:21.347: INFO: Exec stderr: ""
Feb 3 20:51:21.347: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.347: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.381971 6 log.go:172] (0xc002a4e000) (0xc00092ca00) Create stream
I0203 20:51:21.382005 6 log.go:172] (0xc002a4e000) (0xc00092ca00) Stream added, broadcasting: 1
I0203 20:51:21.383907 6 log.go:172] (0xc002a4e000) Reply frame received for 1
I0203 20:51:21.383951 6 log.go:172] (0xc002a4e000) (0xc00018a460) Create stream
I0203 20:51:21.383962 6 log.go:172] (0xc002a4e000) (0xc00018a460) Stream added, broadcasting: 3
I0203 20:51:21.384931 6 log.go:172] (0xc002a4e000) Reply frame received for 3
I0203 20:51:21.384968 6 log.go:172] (0xc002a4e000) (0xc00092cf00) Create stream
I0203 20:51:21.384975 6 log.go:172] (0xc002a4e000) (0xc00092cf00) Stream added, broadcasting: 5
I0203 20:51:21.386125 6 log.go:172] (0xc002a4e000) Reply frame received for 5
I0203 20:51:21.451635 6 log.go:172] (0xc002a4e000) Data frame received for 3
I0203 20:51:21.451666 6 log.go:172] (0xc00018a460) (3) Data frame handling
I0203 20:51:21.451675 6 log.go:172] (0xc00018a460) (3) Data frame sent
I0203 20:51:21.451680 6 log.go:172] (0xc002a4e000) Data frame received for 3
I0203 20:51:21.451685 6 log.go:172] (0xc00018a460) (3) Data frame handling
I0203 20:51:21.451711 6 log.go:172] (0xc002a4e000) Data frame received for 5
I0203 20:51:21.451719 6 log.go:172] (0xc00092cf00) (5) Data frame handling
I0203 20:51:21.453538 6 log.go:172] (0xc002a4e000) Data frame received for 1
I0203 20:51:21.453563 6 log.go:172] (0xc00092ca00) (1) Data frame handling
I0203 20:51:21.453671 6 log.go:172] (0xc00092ca00) (1) Data frame sent
I0203 20:51:21.453706 6 log.go:172] (0xc002a4e000) (0xc00092ca00) Stream removed, broadcasting: 1
I0203 20:51:21.453802 6 log.go:172] (0xc002a4e000) (0xc00092ca00) Stream removed, broadcasting: 1
I0203 20:51:21.453857 6 log.go:172] (0xc002a4e000) (0xc00018a460) Stream removed, broadcasting: 3
I0203 20:51:21.453889 6 log.go:172] (0xc002a4e000) (0xc00092cf00) Stream removed, broadcasting: 5
Feb 3 20:51:21.453: INFO: Exec stderr: ""
I0203 20:51:21.453950 6 log.go:172] (0xc002a4e000) Go away received
Feb 3 20:51:21.453: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.453: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.496258 6 log.go:172] (0xc002a4e840) (0xc00092d900) Create stream
I0203 20:51:21.496285 6 log.go:172] (0xc002a4e840) (0xc00092d900) Stream added, broadcasting: 1
I0203 20:51:21.498851 6 log.go:172] (0xc002a4e840) Reply frame received for 1
I0203 20:51:21.498897 6 log.go:172] (0xc002a4e840) (0xc000a62640) Create stream
I0203 20:51:21.498910 6 log.go:172] (0xc002a4e840) (0xc000a62640) Stream added, broadcasting: 3
I0203 20:51:21.499860 6 log.go:172] (0xc002a4e840) Reply frame received for 3
I0203 20:51:21.499890 6 log.go:172] (0xc002a4e840) (0xc000d2f220) Create stream
I0203 20:51:21.499908 6 log.go:172] (0xc002a4e840) (0xc000d2f220) Stream added, broadcasting: 5
I0203 20:51:21.500719 6 log.go:172] (0xc002a4e840) Reply frame received for 5
I0203 20:51:21.579411 6 log.go:172] (0xc002a4e840) Data frame received for 5
I0203 20:51:21.579471 6 log.go:172] (0xc000d2f220) (5) Data frame handling
I0203 20:51:21.579506 6 log.go:172] (0xc002a4e840) Data frame received for 3
I0203 20:51:21.579533 6 log.go:172] (0xc000a62640) (3) Data frame handling
I0203 20:51:21.579559 6 log.go:172] (0xc000a62640) (3) Data frame sent
I0203 20:51:21.579578 6 log.go:172] (0xc002a4e840) Data frame received for 3
I0203 20:51:21.579600 6 log.go:172] (0xc000a62640) (3) Data frame handling
I0203 20:51:21.581297 6 log.go:172] (0xc002a4e840) Data frame received for 1
I0203 20:51:21.581327 6 log.go:172] (0xc00092d900) (1) Data frame handling
I0203 20:51:21.581348 6 log.go:172] (0xc00092d900) (1) Data frame sent
I0203 20:51:21.581382 6 log.go:172] (0xc002a4e840) (0xc00092d900) Stream removed, broadcasting: 1
I0203 20:51:21.581422 6 log.go:172] (0xc002a4e840) Go away received
I0203 20:51:21.581475 6 log.go:172] (0xc002a4e840) (0xc00092d900) Stream removed, broadcasting: 1
I0203 20:51:21.581498 6 log.go:172] (0xc002a4e840) (0xc000a62640) Stream removed, broadcasting: 3
I0203 20:51:21.581512 6 log.go:172] (0xc002a4e840) (0xc000d2f220) Stream removed, broadcasting: 5
Feb 3 20:51:21.581: INFO: Exec stderr: ""
Feb 3 20:51:21.581: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.581: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.616559 6 log.go:172] (0xc002d94580) (0xc000a62d20) Create stream
I0203 20:51:21.616593 6 log.go:172] (0xc002d94580) (0xc000a62d20) Stream added, broadcasting: 1
I0203 20:51:21.618966 6 log.go:172] (0xc002d94580) Reply frame received for 1
I0203 20:51:21.619005 6 log.go:172] (0xc002d94580) (0xc00018b180) Create stream
I0203 20:51:21.619014 6 log.go:172] (0xc002d94580) (0xc00018b180) Stream added, broadcasting: 3
I0203 20:51:21.619778 6 log.go:172] (0xc002d94580) Reply frame received for 3
I0203 20:51:21.619805 6 log.go:172] (0xc002d94580) (0xc00018b4a0) Create stream
I0203 20:51:21.619815 6 log.go:172] (0xc002d94580) (0xc00018b4a0) Stream added, broadcasting: 5
I0203 20:51:21.620419 6 log.go:172] (0xc002d94580) Reply frame received for 5
I0203 20:51:21.679181 6 log.go:172] (0xc002d94580) Data frame received for 5
I0203 20:51:21.679236 6 log.go:172] (0xc00018b4a0) (5) Data frame handling
I0203 20:51:21.679267 6 log.go:172] (0xc002d94580) Data frame received for 3
I0203 20:51:21.679280 6 log.go:172] (0xc00018b180) (3) Data frame handling
I0203 20:51:21.679296 6 log.go:172] (0xc00018b180) (3) Data frame sent
I0203 20:51:21.679313 6 log.go:172] (0xc002d94580) Data frame received for 3
I0203 20:51:21.679322 6 log.go:172] (0xc00018b180) (3) Data frame handling
I0203 20:51:21.681158 6 log.go:172] (0xc002d94580) Data frame received for 1
I0203 20:51:21.681173 6 log.go:172] (0xc000a62d20) (1) Data frame handling
I0203 20:51:21.681179 6 log.go:172] (0xc000a62d20) (1) Data frame sent
I0203 20:51:21.681187 6 log.go:172] (0xc002d94580) (0xc000a62d20) Stream removed, broadcasting: 1
I0203 20:51:21.681274 6 log.go:172] (0xc002d94580) (0xc000a62d20) Stream removed, broadcasting: 1
I0203 20:51:21.681290 6 log.go:172] (0xc002d94580) (0xc00018b180) Stream removed, broadcasting: 3
I0203 20:51:21.681455 6 log.go:172] (0xc002d94580) Go away received
I0203 20:51:21.681549 6 log.go:172] (0xc002d94580) (0xc00018b4a0) Stream removed, broadcasting: 5
Feb 3 20:51:21.681: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 3 20:51:21.681: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.681: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.712980 6 log.go:172] (0xc002a4edc0) (0xc00092de00) Create stream
I0203 20:51:21.713008 6 log.go:172] (0xc002a4edc0) (0xc00092de00) Stream added, broadcasting: 1
I0203 20:51:21.715879 6 log.go:172] (0xc002a4edc0) Reply frame received for 1
I0203 20:51:21.715923 6 log.go:172] (0xc002a4edc0) (0xc00018bc20) Create stream
I0203 20:51:21.715936 6 log.go:172] (0xc002a4edc0) (0xc00018bc20) Stream added, broadcasting: 3
I0203 20:51:21.716985 6 log.go:172] (0xc002a4edc0) Reply frame received for 3
I0203 20:51:21.717022 6 log.go:172] (0xc002a4edc0) (0xc00043fea0) Create stream
I0203 20:51:21.717033 6 log.go:172] (0xc002a4edc0) (0xc00043fea0) Stream added, broadcasting: 5
I0203 20:51:21.717996 6 log.go:172] (0xc002a4edc0) Reply frame received for 5
I0203 20:51:21.771910 6 log.go:172] (0xc002a4edc0) Data frame received for 5
I0203 20:51:21.771938 6 log.go:172] (0xc00043fea0) (5) Data frame handling
I0203 20:51:21.771959 6 log.go:172] (0xc002a4edc0) Data frame received for 3
I0203 20:51:21.771967 6 log.go:172] (0xc00018bc20) (3) Data frame handling
I0203 20:51:21.771983 6 log.go:172] (0xc00018bc20) (3) Data frame sent
I0203 20:51:21.771991 6 log.go:172] (0xc002a4edc0) Data frame received for 3
I0203 20:51:21.771998 6 log.go:172] (0xc00018bc20) (3) Data frame handling
I0203 20:51:21.773074 6 log.go:172] (0xc002a4edc0) Data frame received for 1
I0203 20:51:21.773093 6 log.go:172] (0xc00092de00) (1) Data frame handling
I0203 20:51:21.773104 6 log.go:172] (0xc00092de00) (1) Data frame sent
I0203 20:51:21.773119 6 log.go:172] (0xc002a4edc0) (0xc00092de00) Stream removed, broadcasting: 1
I0203 20:51:21.773134 6 log.go:172] (0xc002a4edc0) Go away received
I0203 20:51:21.773278 6 log.go:172] (0xc002a4edc0) (0xc00092de00) Stream removed, broadcasting: 1
I0203 20:51:21.773313 6 log.go:172] (0xc002a4edc0) (0xc00018bc20) Stream removed, broadcasting: 3
I0203 20:51:21.773327 6 log.go:172] (0xc002a4edc0) (0xc00043fea0) Stream removed, broadcasting: 5
Feb 3 20:51:21.773: INFO: Exec stderr: ""
Feb 3 20:51:21.773: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.773: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.805983 6 log.go:172] (0xc002ddab00) (0xc00096f5e0) Create stream
I0203 20:51:21.806036 6 log.go:172] (0xc002ddab00) (0xc00096f5e0) Stream added, broadcasting: 1
I0203 20:51:21.811583 6 log.go:172] (0xc002ddab00) Reply frame received for 1
I0203 20:51:21.811643 6 log.go:172] (0xc002ddab00) (0xc00096f720) Create stream
I0203 20:51:21.811669 6 log.go:172] (0xc002ddab00) (0xc00096f720) Stream added, broadcasting: 3
I0203 20:51:21.813311 6 log.go:172] (0xc002ddab00) Reply frame received for 3
I0203 20:51:21.813359 6 log.go:172] (0xc002ddab00) (0xc00032e8c0) Create stream
I0203 20:51:21.813374 6 log.go:172] (0xc002ddab00) (0xc00032e8c0) Stream added, broadcasting: 5
I0203 20:51:21.814313 6 log.go:172] (0xc002ddab00) Reply frame received for 5
I0203 20:51:21.880280 6 log.go:172] (0xc002ddab00) Data frame received for 3
I0203 20:51:21.880311 6 log.go:172] (0xc00096f720) (3) Data frame handling
I0203 20:51:21.880319 6 log.go:172] (0xc00096f720) (3) Data frame sent
I0203 20:51:21.880324 6 log.go:172] (0xc002ddab00) Data frame received for 3
I0203 20:51:21.880329 6 log.go:172] (0xc00096f720) (3) Data frame handling
I0203 20:51:21.880369 6 log.go:172] (0xc002ddab00) Data frame received for 5
I0203 20:51:21.880400 6 log.go:172] (0xc00032e8c0) (5) Data frame handling
I0203 20:51:21.882250 6 log.go:172] (0xc002ddab00) Data frame received for 1
I0203 20:51:21.882279 6 log.go:172] (0xc00096f5e0) (1) Data frame handling
I0203 20:51:21.882291 6 log.go:172] (0xc00096f5e0) (1) Data frame sent
I0203 20:51:21.882304 6 log.go:172] (0xc002ddab00) (0xc00096f5e0) Stream removed, broadcasting: 1
I0203 20:51:21.882336 6 log.go:172] (0xc002ddab00) Go away received
I0203 20:51:21.882379 6 log.go:172] (0xc002ddab00) (0xc00096f5e0) Stream removed, broadcasting: 1
I0203 20:51:21.882398 6 log.go:172] (0xc002ddab00) (0xc00096f720) Stream removed, broadcasting: 3
I0203 20:51:21.882413 6 log.go:172] (0xc002ddab00) (0xc00032e8c0) Stream removed, broadcasting: 5
Feb 3 20:51:21.882: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 3 20:51:21.882: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.882: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:21.920486 6 log.go:172] (0xc002a4f3f0) (0xc0002d34a0) Create stream
I0203 20:51:21.920516 6 log.go:172] (0xc002a4f3f0) (0xc0002d34a0) Stream added, broadcasting: 1
I0203 20:51:21.923045 6 log.go:172] (0xc002a4f3f0) Reply frame received for 1
I0203 20:51:21.923129 6 log.go:172] (0xc002a4f3f0) (0xc000a62e60) Create stream
I0203 20:51:21.923137 6 log.go:172] (0xc002a4f3f0) (0xc000a62e60) Stream added, broadcasting: 3
I0203 20:51:21.923866 6 log.go:172] (0xc002a4f3f0) Reply frame received for 3
I0203 20:51:21.923892 6 log.go:172] (0xc002a4f3f0) (0xc000a63540) Create stream
I0203 20:51:21.923901 6 log.go:172] (0xc002a4f3f0) (0xc000a63540) Stream added, broadcasting: 5
I0203 20:51:21.924749 6 log.go:172] (0xc002a4f3f0) Reply frame received for 5
I0203 20:51:21.979480 6 log.go:172] (0xc002a4f3f0) Data frame received for 3
I0203 20:51:21.979507 6 log.go:172] (0xc000a62e60) (3) Data frame handling
I0203 20:51:21.979514 6 log.go:172] (0xc000a62e60) (3) Data frame sent
I0203 20:51:21.979519 6 log.go:172] (0xc002a4f3f0) Data frame received for 3
I0203 20:51:21.979524 6 log.go:172] (0xc000a62e60) (3) Data frame handling
I0203 20:51:21.979548 6 log.go:172] (0xc002a4f3f0) Data frame received for 5
I0203 20:51:21.979581 6 log.go:172] (0xc000a63540) (5) Data frame handling
I0203 20:51:21.980975 6 log.go:172] (0xc002a4f3f0) Data frame received for 1
I0203 20:51:21.981003 6 log.go:172] (0xc0002d34a0) (1) Data frame handling
I0203 20:51:21.981023 6 log.go:172] (0xc0002d34a0) (1) Data frame sent
I0203 20:51:21.981040 6 log.go:172] (0xc002a4f3f0) (0xc0002d34a0) Stream removed, broadcasting: 1
I0203 20:51:21.981073 6 log.go:172] (0xc002a4f3f0) Go away received
I0203 20:51:21.981135 6 log.go:172] (0xc002a4f3f0) (0xc0002d34a0) Stream removed, broadcasting: 1
I0203 20:51:21.981159 6 log.go:172] (0xc002a4f3f0) (0xc000a62e60) Stream removed, broadcasting: 3
I0203 20:51:21.981181 6 log.go:172] (0xc002a4f3f0) (0xc000a63540) Stream removed, broadcasting: 5
Feb 3 20:51:21.981: INFO: Exec stderr: ""
Feb 3 20:51:21.981: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:21.981: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:22.009091 6 log.go:172] (0xc002a4fa20) (0xc0002d3d60) Create stream
I0203 20:51:22.009122 6 log.go:172] (0xc002a4fa20) (0xc0002d3d60) Stream added, broadcasting: 1
I0203 20:51:22.013494 6 log.go:172] (0xc002a4fa20) Reply frame received for 1
I0203 20:51:22.013547 6 log.go:172] (0xc002a4fa20) (0xc000370960) Create stream
I0203 20:51:22.013565 6 log.go:172] (0xc002a4fa20) (0xc000370960) Stream added, broadcasting: 3
I0203 20:51:22.014982 6 log.go:172] (0xc002a4fa20) Reply frame received for 3
I0203 20:51:22.015036 6 log.go:172] (0xc002a4fa20) (0xc000552f00) Create stream
I0203 20:51:22.015051 6 log.go:172] (0xc002a4fa20) (0xc000552f00) Stream added, broadcasting: 5
I0203 20:51:22.016495 6 log.go:172] (0xc002a4fa20) Reply frame received for 5
I0203 20:51:22.084102 6 log.go:172] (0xc002a4fa20) Data frame received for 3
I0203 20:51:22.084144 6 log.go:172] (0xc000370960) (3) Data frame handling
I0203 20:51:22.084161 6 log.go:172] (0xc000370960) (3) Data frame sent
I0203 20:51:22.084575 6 log.go:172] (0xc002a4fa20) Data frame received for 5
I0203 20:51:22.084604 6 log.go:172] (0xc000552f00) (5) Data frame handling
I0203 20:51:22.084633 6 log.go:172] (0xc002a4fa20) Data frame received for 3
I0203 20:51:22.084659 6 log.go:172] (0xc000370960) (3) Data frame handling
I0203 20:51:22.086090 6 log.go:172] (0xc002a4fa20) Data frame received for 1
I0203 20:51:22.086127 6 log.go:172] (0xc0002d3d60) (1) Data frame handling
I0203 20:51:22.086142 6 log.go:172] (0xc0002d3d60) (1) Data frame sent
I0203 20:51:22.086153 6 log.go:172] (0xc002a4fa20) (0xc0002d3d60) Stream removed, broadcasting: 1
I0203 20:51:22.086172 6 log.go:172] (0xc002a4fa20) Go away received
I0203 20:51:22.086250 6 log.go:172] (0xc002a4fa20) (0xc0002d3d60) Stream removed, broadcasting: 1
I0203 20:51:22.086276 6 log.go:172] (0xc002a4fa20) (0xc000370960) Stream removed, broadcasting: 3
I0203 20:51:22.086287 6 log.go:172] (0xc002a4fa20) (0xc000552f00) Stream removed, broadcasting: 5
Feb 3 20:51:22.086: INFO: Exec stderr: ""
Feb 3 20:51:22.086: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:22.086: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:22.151049 6 log.go:172] (0xc002d94d10) (0xc000a63d60) Create stream
I0203 20:51:22.151086 6 log.go:172] (0xc002d94d10) (0xc000a63d60) Stream added, broadcasting: 1
I0203 20:51:22.153580 6 log.go:172] (0xc002d94d10) Reply frame received for 1
I0203 20:51:22.153632 6 log.go:172] (0xc002d94d10) (0xc000d2f4a0) Create stream
I0203 20:51:22.153646 6 log.go:172] (0xc002d94d10) (0xc000d2f4a0) Stream added, broadcasting: 3
I0203 20:51:22.154541 6 log.go:172] (0xc002d94d10) Reply frame received for 3
I0203 20:51:22.154577 6 log.go:172] (0xc002d94d10) (0xc000d2f540) Create stream
I0203 20:51:22.154592 6 log.go:172] (0xc002d94d10) (0xc000d2f540) Stream added, broadcasting: 5
I0203 20:51:22.155579 6 log.go:172] (0xc002d94d10) Reply frame received for 5
I0203 20:51:22.218548 6 log.go:172] (0xc002d94d10) Data frame received for 5
I0203 20:51:22.218575 6 log.go:172] (0xc000d2f540) (5) Data frame handling
I0203 20:51:22.218596 6 log.go:172] (0xc002d94d10) Data frame received for 3
I0203 20:51:22.218615 6 log.go:172] (0xc000d2f4a0) (3) Data frame handling
I0203 20:51:22.218623 6 log.go:172] (0xc000d2f4a0) (3) Data frame sent
I0203 20:51:22.218630 6 log.go:172] (0xc002d94d10) Data frame received for 3
I0203 20:51:22.218635 6 log.go:172] (0xc000d2f4a0) (3) Data frame handling
I0203 20:51:22.219554 6 log.go:172] (0xc002d94d10) Data frame received for 1
I0203 20:51:22.219573 6 log.go:172] (0xc000a63d60) (1) Data frame handling
I0203 20:51:22.219584 6 log.go:172] (0xc000a63d60) (1) Data frame sent
I0203 20:51:22.219593 6 log.go:172] (0xc002d94d10) (0xc000a63d60) Stream removed, broadcasting: 1
I0203 20:51:22.219674 6 log.go:172] (0xc002d94d10) Go away received
I0203 20:51:22.219715 6 log.go:172] (0xc002d94d10) (0xc000a63d60) Stream removed, broadcasting: 1
I0203 20:51:22.219735 6 log.go:172] (0xc002d94d10) (0xc000d2f4a0) Stream removed, broadcasting: 3
I0203 20:51:22.219744 6 log.go:172] (0xc002d94d10) (0xc000d2f540) Stream removed, broadcasting: 5
Feb 3 20:51:22.219: INFO: Exec stderr: ""
Feb 3 20:51:22.219: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-152 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 20:51:22.219: INFO: >>> kubeConfig: /root/.kube/config
I0203 20:51:22.246488 6 log.go:172] (0xc00303c0b0) (0xc000371540) Create stream
I0203 20:51:22.246513 6 log.go:172] (0xc00303c0b0) (0xc000371540) Stream added, broadcasting: 1
I0203 20:51:22.248657 6 log.go:172] (0xc00303c0b0) Reply frame received for 1
I0203 20:51:22.248679 6 log.go:172] (0xc00303c0b0) (0xc000a63e00) Create stream
I0203 20:51:22.248684 6 log.go:172] (0xc00303c0b0) (0xc000a63e00) Stream added, broadcasting: 3
I0203 20:51:22.249431 6 log.go:172] (0xc00303c0b0) Reply frame received for 3
I0203 20:51:22.249468 6 log.go:172] (0xc00303c0b0) (0xc000553ae0) Create stream
I0203 20:51:22.249482 6 log.go:172] (0xc00303c0b0) (0xc000553ae0) Stream added, broadcasting: 5
I0203 20:51:22.250170 6 log.go:172] (0xc00303c0b0) Reply frame received for 5
I0203 20:51:22.325400 6 log.go:172] (0xc00303c0b0) Data frame received for 3
I0203 20:51:22.325454 6 log.go:172] (0xc000a63e00) (3) Data frame handling
I0203 20:51:22.325481 6 log.go:172] (0xc000a63e00) (3) Data frame sent
I0203 20:51:22.325501 6 log.go:172] (0xc00303c0b0) Data frame received for 3
I0203 20:51:22.325531 6 log.go:172] (0xc000a63e00) (3) Data frame handling
I0203 20:51:22.325583 6 log.go:172] (0xc00303c0b0) Data frame received for 5
I0203 20:51:22.325603 6 log.go:172] (0xc000553ae0) (5) Data frame handling
I0203 20:51:22.326735 6 log.go:172] (0xc00303c0b0) Data frame received for 1
I0203 20:51:22.326768 6 log.go:172] (0xc000371540) (1) Data frame handling
I0203 20:51:22.326789 6 log.go:172] (0xc000371540) (1) Data frame sent
I0203 20:51:22.326811 6 log.go:172] (0xc00303c0b0) (0xc000371540) Stream removed, broadcasting: 1
I0203 20:51:22.326856 6 log.go:172] (0xc00303c0b0) Go away received
I0203 20:51:22.326917 6 log.go:172] (0xc00303c0b0) (0xc000371540) Stream removed, broadcasting: 1
I0203 20:51:22.326937 6 log.go:172] (0xc00303c0b0) (0xc000a63e00) Stream removed, broadcasting: 3
I0203 20:51:22.326949 6 log.go:172] (0xc00303c0b0) (0xc000553ae0) Stream removed, broadcasting: 5
Feb 3 20:51:22.326: INFO: Exec stderr: ""
stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:22.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-152" for this suite. • [SLOW TEST:13.267 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:51:22.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Feb 3 20:51:22.432: INFO: Waiting up to 5m0s for pod "pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4" in namespace "emptydir-4550" to be "success or failure" Feb 3 20:51:22.439: INFO: Pod "pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.289786ms Feb 3 20:51:24.443: INFO: Pod "pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011310483s Feb 3 20:51:26.447: INFO: Pod "pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015578766s STEP: Saw pod success Feb 3 20:51:26.447: INFO: Pod "pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4" satisfied condition "success or failure" Feb 3 20:51:26.449: INFO: Trying to get logs from node jerma-worker2 pod pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4 container test-container: STEP: delete the pod Feb 3 20:51:26.479: INFO: Waiting for pod pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4 to disappear Feb 3 20:51:26.570: INFO: Pod pod-4aa1aba9-b314-47d7-8e5b-7b8151c765f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:26.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4550" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":170,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:51:26.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 20:51:26.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6499' Feb 3 20:51:29.570: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 3 20:51:29.570: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 3 20:51:29.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6499' Feb 3 20:51:29.728: INFO: stderr: "" Feb 3 20:51:29.728: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:29.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6499" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":8,"skipped":172,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:51:29.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6350/configmap-test-2d93d74b-e10d-4a37-a10f-960742395738 STEP: Creating a pod to test consume configMaps Feb 3 20:51:29.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b" in namespace "configmap-6350" to be "success or failure" Feb 3 20:51:29.850: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028775ms Feb 3 20:51:32.093: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247589749s Feb 3 20:51:34.096: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25065845s Feb 3 20:51:36.100: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.254444448s STEP: Saw pod success Feb 3 20:51:36.100: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b" satisfied condition "success or failure" Feb 3 20:51:36.103: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b container env-test: STEP: delete the pod Feb 3 20:51:36.183: INFO: Waiting for pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b to disappear Feb 3 20:51:36.191: INFO: Pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6350" for this suite. 
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:51:29.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6350/configmap-test-2d93d74b-e10d-4a37-a10f-960742395738
STEP: Creating a pod to test consume configMaps
Feb 3 20:51:29.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b" in namespace "configmap-6350" to be "success or failure"
Feb 3 20:51:29.850: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028775ms
Feb 3 20:51:32.093: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247589749s
Feb 3 20:51:34.096: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25065845s
Feb 3 20:51:36.100: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.254444448s
STEP: Saw pod success
Feb 3 20:51:36.100: INFO: Pod "pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b" satisfied condition "success or failure"
Feb 3 20:51:36.103: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b container env-test:
STEP: delete the pod
Feb 3 20:51:36.183: INFO: Waiting for pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b to disappear
Feb 3 20:51:36.191: INFO: Pod pod-configmaps-ed3e4f80-fe34-497a-a118-6979f289327b no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:51:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6350" for this suite.

• [SLOW TEST:6.481 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
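A minimal sketch of consuming a ConfigMap key as an environment variable, the mechanism this spec checks; the ConfigMap name, key, and variable name are placeholders for the generated ones in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------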
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 20:51:43.790: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:44.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1627" for this suite. STEP: Destroying namespace "webhook-1627-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.277 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":10,"skipped":202,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:51:44.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9300 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-9300 I0203 20:51:44.731490 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9300, replica count: 2 I0203 20:51:47.781960 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 20:51:50.782282 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:51:44.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9300
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9300
I0203 20:51:44.731490 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9300, replica count: 2
I0203 20:51:47.781960 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0203 20:51:50.782282 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 3 20:51:50.782: INFO: Creating new exec pod
Feb 3 20:51:55.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9300 execpodnfb9s -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 3 20:51:56.077: INFO: stderr: "I0203 20:51:55.968682 84 log.go:172] (0xc000107130) (0xc000669cc0) Create stream\nI0203 20:51:55.968745 84 log.go:172] (0xc000107130) (0xc000669cc0) Stream added, broadcasting: 1\nI0203 20:51:55.985067 84 log.go:172] (0xc000107130) Reply frame received for 1\nI0203 20:51:55.985123 84 log.go:172] (0xc000107130) (0xc000669d60) Create stream\nI0203 20:51:55.985135 84 log.go:172] (0xc000107130) (0xc000669d60) Stream added, broadcasting: 3\nI0203 20:51:55.986067 84 log.go:172] (0xc000107130) Reply frame received for 3\nI0203 20:51:55.986110 84 log.go:172] (0xc000107130) (0xc000669e00) Create stream\nI0203 20:51:55.986122 84 log.go:172] (0xc000107130) (0xc000669e00) Stream added, broadcasting: 5\nI0203 20:51:55.986977 84 log.go:172] (0xc000107130) Reply frame received for 5\nI0203 20:51:56.063653 84 log.go:172] (0xc000107130) Data frame received for 5\nI0203 20:51:56.063712 84 log.go:172] (0xc000669e00) (5) Data frame handling\nI0203 20:51:56.063751 84 log.go:172] (0xc000669e00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0203 20:51:56.063851 84 log.go:172] (0xc000107130) Data frame received for 5\nI0203 20:51:56.063865 84 log.go:172] (0xc000669e00) (5) Data frame handling\nI0203 20:51:56.064622 84 log.go:172] (0xc000107130) Data frame received for 3\nI0203 20:51:56.064652 84 log.go:172] (0xc000669d60) (3) Data frame handling\nI0203 20:51:56.068036 84 log.go:172] (0xc000107130) Data frame received for 1\nI0203 20:51:56.068084 84 log.go:172] (0xc000669cc0) (1) Data frame handling\nI0203 20:51:56.068130 84 log.go:172] (0xc000669cc0) (1) Data frame sent\nI0203 20:51:56.068264 84 log.go:172] (0xc000107130) (0xc000669cc0) Stream removed, broadcasting: 1\nI0203 20:51:56.068388 84 log.go:172] (0xc000107130) Go away received\nI0203 20:51:56.068627 84 log.go:172] (0xc000107130) (0xc000669cc0) Stream removed, broadcasting: 1\nI0203 20:51:56.068642 84 log.go:172] (0xc000107130) (0xc000669d60) Stream removed, broadcasting: 3\nI0203 20:51:56.068650 84 log.go:172] (0xc000107130) (0xc000669e00) Stream removed, broadcasting: 5\n"
Feb 3 20:51:56.077: INFO: stdout: ""
Feb 3 20:51:56.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9300 execpodnfb9s -- /bin/sh -x -c nc -zv -t -w 2 10.96.131.68 80'
Feb 3 20:51:56.292: INFO: stderr: "I0203 20:51:56.209080 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Create stream\nI0203 20:51:56.209231 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Stream added, broadcasting: 1\nI0203 20:51:56.212265 107 log.go:172] (0xc0003c3130) Reply frame received for 1\nI0203 20:51:56.212304 107 log.go:172] (0xc0003c3130) (0xc0009f2140) Create stream\nI0203 20:51:56.212312 107 log.go:172] (0xc0003c3130) (0xc0009f2140) Stream added, broadcasting: 3\nI0203 20:51:56.214550 107 log.go:172] (0xc0003c3130) Reply frame received for 3\nI0203 20:51:56.214590 107 log.go:172] (0xc0003c3130) (0xc0009f21e0) Create stream\nI0203 20:51:56.214603 107 log.go:172] (0xc0003c3130) (0xc0009f21e0) Stream added, broadcasting: 5\nI0203 20:51:56.220186 107 log.go:172] (0xc0003c3130) Reply frame received for 5\nI0203 20:51:56.286464 107 log.go:172] (0xc0003c3130) Data frame received for 3\nI0203 20:51:56.286496 107 log.go:172] (0xc0009f2140) (3) Data frame handling\nI0203 20:51:56.286517 107 log.go:172] (0xc0003c3130) Data frame received for 5\nI0203 20:51:56.286531 107 log.go:172] (0xc0009f21e0) (5) Data frame handling\nI0203 20:51:56.286540 107 log.go:172] (0xc0009f21e0) (5) Data frame sent\nI0203 20:51:56.286548 107 log.go:172] (0xc0003c3130) Data frame received for 5\nI0203 20:51:56.286555 107 log.go:172] (0xc0009f21e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.131.68 80\nConnection to 10.96.131.68 80 port [tcp/http] succeeded!\nI0203 20:51:56.287313 107 log.go:172] (0xc0003c3130) Data frame received for 1\nI0203 20:51:56.287330 107 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0203 20:51:56.287340 107 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0203 20:51:56.287351 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Stream removed, broadcasting: 1\nI0203 20:51:56.287365 107 log.go:172] (0xc0003c3130) Go away received\nI0203 20:51:56.287759 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Stream removed, broadcasting: 1\nI0203 20:51:56.287782 107 log.go:172] (0xc0003c3130) (0xc0009f2140) Stream removed, broadcasting: 3\nI0203 20:51:56.287802 107 log.go:172] (0xc0003c3130) (0xc0009f21e0) Stream removed, broadcasting: 5\n"
Feb 3 20:51:56.292: INFO: stdout: ""
Feb 3 20:51:56.292: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:51:56.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9300" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.850 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":11,"skipped":213,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
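A sketch of the type flip this spec performs: the service starts as ExternalName (a DNS CNAME, no virtual IP) and is mutated to ClusterIP, after which kube-proxy programs a cluster IP (10.96.131.68 in the run above) and the nc probes succeed. The external name and selector below are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Initial shape: type=ExternalName, no ports, no selector.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com",
		},
	}
	// The change under test: flip to ClusterIP, clear the external name,
	// and add a selector plus a TCP port for the backing pods.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{
		Port:       80,
		TargetPort: intstr.FromInt(80),
		Protocol:   corev1.ProtocolTCP,
	}}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
------------------------------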
20:51:56.286496 107 log.go:172] (0xc0009f2140) (3) Data frame handling\nI0203 20:51:56.286517 107 log.go:172] (0xc0003c3130) Data frame received for 5\nI0203 20:51:56.286531 107 log.go:172] (0xc0009f21e0) (5) Data frame handling\nI0203 20:51:56.286540 107 log.go:172] (0xc0009f21e0) (5) Data frame sent\nI0203 20:51:56.286548 107 log.go:172] (0xc0003c3130) Data frame received for 5\nI0203 20:51:56.286555 107 log.go:172] (0xc0009f21e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.131.68 80\nConnection to 10.96.131.68 80 port [tcp/http] succeeded!\nI0203 20:51:56.287313 107 log.go:172] (0xc0003c3130) Data frame received for 1\nI0203 20:51:56.287330 107 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0203 20:51:56.287340 107 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0203 20:51:56.287351 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Stream removed, broadcasting: 1\nI0203 20:51:56.287365 107 log.go:172] (0xc0003c3130) Go away received\nI0203 20:51:56.287759 107 log.go:172] (0xc0003c3130) (0xc0009f2000) Stream removed, broadcasting: 1\nI0203 20:51:56.287782 107 log.go:172] (0xc0003c3130) (0xc0009f2140) Stream removed, broadcasting: 3\nI0203 20:51:56.287802 107 log.go:172] (0xc0003c3130) (0xc0009f21e0) Stream removed, broadcasting: 5\n" Feb 3 20:51:56.292: INFO: stdout: "" Feb 3 20:51:56.292: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:51:56.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9300" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.850 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":11,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:51:56.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Feb 3 20:51:56.388: INFO: >>> kubeConfig: /root/.kube/config Feb 3 20:51:58.962: INFO: >>> kubeConfig: 
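Note: the type flip this test verifies is an ordinary Service update. A minimal sketch using the names and namespace from this run (same current-client-go assumption as the earlier sketch); the single-port list is a reasonable guess at what a minimal ClusterIP conversion needs, not the suite's exact spec:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	svcs := cs.CoreV1().Services("services-9300")

	// Fetch the ExternalName service created earlier in the test.
	svc, err := svcs.Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the type: drop the external name and expose port 80 so the
	// apiserver allocates a ClusterIP (10.96.131.68 in this run).
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}}
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}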
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:51:56.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 3 20:51:56.388: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 20:51:58.962: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:09.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7574" for this suite.
• [SLOW TEST:13.059 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":12,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
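Note: "show up in OpenAPI documentation" concretely means both kinds appear as definitions in the apiserver's aggregated /openapi/v2 document. A rough check of that, with hypothetical kind names standing in for the ones the suite generates:

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pull the aggregated OpenAPI v2 document straight from the apiserver.
	body, err := cs.Discovery().RESTClient().Get().
		AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}

	// Both kinds should be published even though they share a group and
	// version. These kind names are placeholders, not the suite's.
	for _, kind := range []string{"E2eTestCrdPublishOpenapiFoo", "E2eTestCrdPublishOpenapiBar"} {
		fmt.Printf("%s published: %v\n", kind, strings.Contains(string(body), kind))
	}
}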
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:09.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 3 20:52:09.812: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 3 20:52:11.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982329, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982329, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982329, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982329, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 3 20:52:14.944: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 3 20:52:14.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:16.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5393" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:7.016 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":13,"skipped":256,"failed":0}
SSSSSSSSSSSSSS
------------------------------
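Note: the "List CRs in v1" / "List CRs in v2" steps are what actually exercise the conversion webhook: the apiserver must convert every stored object to whichever version the list request names. A dynamic-client sketch of the same idea, with hypothetical CRD coordinates in place of the suite's generated ones:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Hypothetical CRD coordinates; the suite generates its own names.
	group, resource := "stable.example.com", "convertdemos"

	// Listing at each version returns the mixed v1/v2 pair created above,
	// converted to the requested version by the conversion webhook.
	for _, version := range []string{"v1", "v2"} {
		gvr := schema.GroupVersionResource{Group: group, Version: version, Resource: resource}
		list, err := dyn.Resource(gvr).Namespace("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("listed %d objects at %s\n", len(list.Items), version)
	}
}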
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:16.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7d22cb91-2c3e-4e79-a04a-57480f3f17b9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7d22cb91-2c3e-4e79-a04a-57480f3f17b9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:22.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3530" for this suite.
• [SLOW TEST:6.244 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
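Note: the update step above is an ordinary ConfigMap write; the interesting part is that the kubelet re-projects the volume on its sync loop, so the mounted file changes without restarting the pod, which is what the "waiting to observe update in volume" step polls for. A sketch using this run's object names and a placeholder data key:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := cs.CoreV1().ConfigMaps("projected-3530")

	// Mutate the ConfigMap backing the projected volume. The name matches
	// this run; the data key and value are placeholders.
	cm, err := cms.Get(ctx, "projected-configmap-test-upd-7d22cb91-2c3e-4e79-a04a-57480f3f17b9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	if _, err := cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}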
[k8s.io] Lease lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:22.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:22.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2801" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":15,"skipped":306,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:22.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 3 20:52:23.498: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 3 20:52:25.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982343, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982343, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982343, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982343, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 3 20:52:28.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:28.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5442" for this suite.
STEP: Destroying namespace "webhook-5442-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.207 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":16,"skipped":314,"failed":0}
SSSSSSSSSSS
------------------------------
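Note: the "Patching ... rules to include the create operation" step is a targeted JSON patch against the webhook configuration object. A sketch of that one call; the configuration name here is a placeholder for whatever the suite created:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// JSON-patch the first webhook's first rule so CREATE is matched again,
	// which makes subsequently created configMaps get mutated.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Patch(context.TODO(), "e2e-test-mutating-webhook", types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}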
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:29.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb 3 20:52:29.155: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 3 20:52:39.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1418" for this suite.
• [SLOW TEST:10.484 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":17,"skipped":325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
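Note: on a RestartNever pod, each init container must run to completion exactly once, in declaration order, before any app container starts; a failure fails the whole pod rather than retrying. A minimal pod that exercises this ordering, with illustrative names and a busybox image (not the suite's exact spec):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			// Init containers run to completion, in order, before run1 starts.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"sleep", "10"}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}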
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 3 20:52:39.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 3 20:52:39.860: INFO: Creating deployment "webserver-deployment"
Feb 3 20:52:39.863: INFO: Waiting for observed generation 1
Feb 3 20:52:41.869: INFO: Waiting for all required pods to come up
Feb 3 20:52:41.874: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 3 20:52:53.896: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 3 20:52:53.904: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 3 20:52:53.911: INFO: Updating deployment webserver-deployment
Feb 3 20:52:53.911: INFO: Waiting for observed generation 2
Feb 3 20:52:55.977: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 3 20:52:55.979: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 3 20:52:55.982: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 3 20:52:55.989: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 3 20:52:55.989: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 3 20:52:55.992: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 3 20:52:55.996: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 3 20:52:55.996: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 3 20:52:56.001: INFO: Updating deployment webserver-deployment
Feb 3 20:52:56.001: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 3 20:52:56.255: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 3 20:52:56.392: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
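Note: the final two numbers are the proportional-scaling invariant itself. Before the scale-up the two ReplicaSets held 8 and 5 desired replicas (13 total); scaling the deployment from 10 to 30 with maxSurge=3 permits 33 pods, so the 20 extra slots are divided in proportion to the existing sizes (20×8/13 ≈ 12 and 20×5/13 ≈ 8), landing exactly on the 20 and 13 verified above. The scale-up itself is a write to the deployment's scale subresource; a sketch under the same current-client-go assumption as the earlier examples:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	deps := cs.AppsV1().Deployments("deployment-5189")

	// Read the scale subresource and bump replicas 10 -> 30, as the test does.
	scale, err := deps.GetScale(ctx, "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 30
	// The deployment controller then divides the new capacity between the
	// live ReplicaSets in proportion to their current sizes (see note above).
	if _, err := deps.UpdateScale(ctx, "webserver-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}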
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 20:52:57.475: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 3 20:52:57.475: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5189 /apis/apps/v1/namespaces/deployment-5189/replicasets/webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 6374608 3 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 60a20156-a8e9-4c07-a11f-8c563fe5bbd1 0xc0025f8037 0xc0025f8038}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f80a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 3 20:52:57.713: INFO: Pod "webserver-deployment-595b5b9587-8bl9x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8bl9x webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-8bl9x b4ff29d9-4204-46c1-878d-927746758fd3 6374602 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f86e7 0xc0025f86e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.713: INFO: Pod "webserver-deployment-595b5b9587-8mmx8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8mmx8 webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-8mmx8 93cd0e01-c454-4041-8ad5-e147c0bd11a3 6374629 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f8817 0xc0025f8818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 20:52:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.713: INFO: Pod "webserver-deployment-595b5b9587-8mzhk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8mzhk webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-8mzhk 506c552c-86c9-49c6-acd5-229abbd68ad4 6374601 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f8a57 0xc0025f8a58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.713: INFO: Pod "webserver-deployment-595b5b9587-96fjl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-96fjl webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-96fjl a482362a-591b-4ae9-8c90-1e0623486806 6374423 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f8ba7 0xc0025f8ba8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:
NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.188,StartTime:2021-02-03 20:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0796c1d9b7f8694581302301a1f21934d84a3c7dad02566a22c20ecd524fd5c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.713: INFO: Pod "webserver-deployment-595b5b9587-9svpr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9svpr webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-9svpr 30eb8348-77b7-436f-9d8b-703a195a6dad 6374382 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f8dc7 0xc0025f8dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.187,StartTime:2021-02-03 20:52:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aa3fa7c15f62d154b62de7ba069b812bb2f1445cc4e7309f03047ef50ccc0e30,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.714: INFO: Pod "webserver-deployment-595b5b9587-9tqlh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9tqlh webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-9tqlh e65f2562-d575-4859-88f8-b044558265da 6374577 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f8f77 0xc0025f8f78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.714: INFO: Pod "webserver-deployment-595b5b9587-cqnh9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cqnh9 webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-cqnh9 faf2f4ad-2c90-4021-9a21-22d0a876a6fc 6374430 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f90f7 0xc0025f90f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.157,StartTime:2021-02-03 20:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://724e04a90e397dbd24a57ea499f74a0570426fb17f940e1bb1393e95f1cb7898,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.714: INFO: Pod "webserver-deployment-595b5b9587-fjrsj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fjrsj webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-fjrsj 99e273b8-c404-4c33-ab0c-d48a7fed7a74 6374604 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f92c7 0xc0025f92c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.714: INFO: Pod "webserver-deployment-595b5b9587-gt79f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gt79f webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-gt79f 2defb9c4-0a9a-40cb-bfe0-b1cb30a60c87 6374603 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9417 0xc0025f9418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-hq7nv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hq7nv webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-hq7nv f90bd2ed-0063-488a-b680-08476704300c 6374622 0 
2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9547 0xc0025f9548}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2021-02-03 20:52:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-jj6w7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jj6w7 webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-jj6w7 e3de13c9-87fd-417a-961a-090885651901 6374600 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9737 0xc0025f9738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-js8mv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-js8mv webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-js8mv 8528d09a-cdf2-414c-9e71-9ef7e6331485 6374574 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9887 0xc0025f9888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-ks9xn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ks9xn webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-ks9xn 7e9436cf-5722-4fdb-9f78-bddf225b97cf 6374483 0 2021-02-03 20:52:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f99e7 0xc0025f99e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.190,StartTime:2021-02-03 20:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://eee7b39962ed4658a544e5d634ad2db4dc3b0a0c32b86cce8bb5c3afe72d3456,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-m89mp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m89mp webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-m89mp ca9e8624-2372-48d4-b8b0-019dbb8a13f4 6374417 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9bb7 0xc0025f9bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.156,StartTime:2021-02-03 20:52:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f1f94fc3344b642a9f81e9f2a8d8949bdb465da01c737262f9978671223de331,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.724: INFO: Pod "webserver-deployment-595b5b9587-ngmqq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ngmqq webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-ngmqq c6b68619-5693-4626-8f0d-5c9d7204b978 6374486 0 2021-02-03 20:52:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9d57 0xc0025f9d58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.191,StartTime:2021-02-03 20:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://49cf02ed254e4d47d128fb8306bfa01852370e4fbf7581bf4eb298c4ce7d31be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-595b5b9587-qm88x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qm88x webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-qm88x 0298028a-82e6-4c23-bb3b-f5c6ab67b329 6374580 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0025f9ee7 0xc0025f9ee8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-595b5b9587-rt2sx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rt2sx webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-rt2sx 4e68c11d-d520-49d0-b25a-8f779a7c8864 6374607 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0024d4077 0xc0024d4078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 20:52:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-595b5b9587-tnhdn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tnhdn webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-tnhdn 9ca4c883-badd-43b2-8a71-d4b84de722b6 6374579 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0024d4267 0xc0024d4268}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-595b5b9587-v7zrw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v7zrw webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-v7zrw f18eabaf-5c27-46d0-bbd8-ae28a996afe0 6374455 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0024d4487 0xc0024d4488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.159,StartTime:2021-02-03 20:52:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://10d6617f351361bef9f5f89bb2ba69a953e5b98f8738edabb3afacba98683be2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-595b5b9587-wgqsv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wgqsv webserver-deployment-595b5b9587- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-595b5b9587-wgqsv 8d01cbf6-9ec3-4dbb-9372-578f61c83307 6374444 0 2021-02-03 20:52:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 763f33a0-3da0-4f70-be58-f8743ddc7bc3 0xc0024d46b7 0xc0024d46b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.189,StartTime:2021-02-03 20:52:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:52:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a39b84bcec1dd918f7e87564987aa2cfe454d3c94c06c42bf2e542f85d7a3007,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-c7997dcc8-44lr4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-44lr4 webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-44lr4 39b7e1fd-0c03-4607-9db0-738d112778ef 6374597 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d4877 0xc0024d4878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.725: INFO: Pod "webserver-deployment-c7997dcc8-9lp2r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9lp2r webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-9lp2r b5435006-7e70-4719-a1f8-7e845e028ee3 6374575 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d4a07 0xc0024d4a08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-dfmfh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dfmfh webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-dfmfh 9ee5a2f2-80d4-4805-8215-6c0cf6808bf7 6374563 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d4b97 0xc0024d4b98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-k7nrs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k7nrs webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-k7nrs 50600615-dddb-4bc8-9c25-2edf88369fae 6374598 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d4d67 0xc0024d4d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:no
de.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-k95rd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k95rd webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-k95rd 3ccf6df9-a43a-4330-bb60-052682702586 6374596 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d4ed7 0xc0024d4ed8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServ
iceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-l9jk4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l9jk4 webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-l9jk4 edd8dc6c-1c18-4e41-b6d0-54bc32e5fb3c 6374516 0 2021-02-03 20:52:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d50a7 0xc0024d50a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerNam
e:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2021-02-03 20:52:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-ppmzj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ppmzj webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-ppmzj a4cada7b-8493-4f5c-b4bd-455b7b8506c5 6374613 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5257 0xc0024d5258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.726: INFO: Pod "webserver-deployment-c7997dcc8-ptw5k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ptw5k webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-ptw5k 4e5a64e5-734d-4f2f-82e9-1af2ac6df8b8 6374573 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5397 0xc0024d5398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.727: INFO: Pod "webserver-deployment-c7997dcc8-q4hr2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q4hr2 webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-q4hr2 03f37823-e13a-4ff6-b848-1542eaaf0d02 6374526 0 2021-02-03 20:52:53 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5507 0xc0024d5508}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 20:52:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.727: INFO: Pod "webserver-deployment-c7997dcc8-q7dgc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q7dgc webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-q7dgc 5fc3b65d-685c-4186-98bc-6aa03551280f 6374536 0 2021-02-03 20:52:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5717 0xc0024d5718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0
,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2021-02-03 20:52:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.727: INFO: Pod "webserver-deployment-c7997dcc8-rhvw4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rhvw4 webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-rhvw4 e23aac63-ba80-4235-84ff-52e9a385262f 6374538 0 2021-02-03 20:52:54 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d58f7 0xc0024d58f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 20:52:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.727: INFO: Pod "webserver-deployment-c7997dcc8-rnmvr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rnmvr webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-rnmvr 6679eba7-cbb5-4499-a89e-725c0014735f 6374599 0 2021-02-03 20:52:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5aa7 0xc0024d5aa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead
:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 20:52:57.728: INFO: Pod "webserver-deployment-c7997dcc8-z4f65" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z4f65 webserver-deployment-c7997dcc8- deployment-5189 /api/v1/namespaces/deployment-5189/pods/webserver-deployment-c7997dcc8-z4f65 0c97ea0c-7f47-43a2-8217-a507b097e455 6374512 0 2021-02-03 20:52:53 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2c463b2a-de93-4a91-b248-d0bac42ea4da 0xc0024d5c07 0xc0024d5c08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6njbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6njbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6njbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClass
Name:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:52:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 20:52:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:52:57.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5189" for this suite. 
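------------------------------
For context: the pod dumps above come from the proportional-scaling conformance test, which scales "webserver-deployment" in namespace "deployment-5189" while a rollout to the unresolvable image webserver:404 is wedged; the deployment controller then splits the added replicas proportionally between the old and new ReplicaSets. A minimal client-go sketch of the same scale update follows. It is illustrative only, not the suite's code: the names and namespace mirror the log, the target count of 30 follows the upstream test, and the UpdateScale signature shown is from current client-go rather than the v1.17-era code.

package main

import (
	"context"
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Bump spec.replicas through the scale subresource. With a RollingUpdate
	// strategy and a stuck rollout, the deployment controller distributes the
	// extra replicas proportionally across the old and new ReplicaSets, which
	// is the behavior asserted above. Name, namespace, and replica count are
	// taken from the log/test and are assumptions for any other cluster.
	scale := &autoscalingv1.Scale{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment", Namespace: "deployment-5189"},
		Spec:       autoscalingv1.ScaleSpec{Replicas: 30},
	}
	out, err := cs.AppsV1().Deployments("deployment-5189").
		UpdateScale(context.TODO(), "webserver-deployment", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("scaled to %d replicas\n", out.Spec.Replicas)
}
------------------------------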
• [SLOW TEST:18.403 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":18,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:52:57.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 20:52:58.088: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:53:18.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9991" for this suite. 
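------------------------------
For context: the Pods test above ("remote command execution over websockets") exercises the pods/exec subresource over a websocket connection instead of SPDY. A rough client-go sketch of the same operation follows; it is an approximation, not the suite's code — NewWebSocketExecutor only exists in much newer client-go (the v1.17-era test dials the websocket by hand), and the pod name and command here are placeholders.

package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the exec-subresource request, equivalent to
	// POST /api/v1/namespaces/pods-9991/pods/<pod>/exec?command=...
	// "pod-exec-websocket" is a placeholder pod name.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-9991").Name("pod-exec-websocket").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution over websockets"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	// Stream the command's output back over a websocket transport.
	exec, err := remotecommand.NewWebSocketExecutor(cfg, "GET", req.URL().String())
	if err != nil {
		panic(err)
	}
	if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
		Stdout: os.Stdout,
		Stderr: os.Stderr,
	}); err != nil {
		panic(err)
	}
}
------------------------------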
• [SLOW TEST:20.521 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":394,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:53:18.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:53:25.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4140" for this suite. • [SLOW TEST:7.543 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":20,"skipped":396,"failed":0} [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:53:25.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Feb 3 20:53:26.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 3 20:53:26.552: INFO: stderr: "" Feb 3 20:53:26.552: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40039\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:40039/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:53:26.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6761" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":21,"skipped":396,"failed":0} ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:53:26.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 20:53:27.077: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 3 20:53:32.082: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 20:53:32.082: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 3 20:53:34.087: INFO: Creating deployment "test-rollover-deployment" Feb 3 20:53:34.113: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 3 20:53:36.119: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 3 20:53:36.125: INFO: Ensure that both replica sets have 1 created replica Feb 3 20:53:36.130: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 3 20:53:36.136: INFO: Updating deployment test-rollover-deployment Feb 3 20:53:36.136: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 3 20:53:38.192: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 3 20:53:38.277: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 3 20:53:38.360: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:38.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982416, loc:(*time.Location)(0x791c680)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:40.367: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:40.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982416, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:42.368: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:42.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982421, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:44.369: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:44.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982421, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:46.368: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:46.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982421, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:48.368: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:48.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982421, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:50.366: INFO: all replica sets need to contain the pod-template-hash label Feb 3 20:53:50.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982421, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747982414, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 20:53:52.707: INFO: Feb 3 20:53:52.707: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 20:53:52.741: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2415 /apis/apps/v1/namespaces/deployment-2415/deployments/test-rollover-deployment f7b60b17-25d8-4e00-9485-ac68d1f8231b 6375206 2 2021-02-03 20:53:34 +0000 UTC map[name:rollover-pod] 
map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cfcdc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-03 20:53:34 +0000 UTC,LastTransitionTime:2021-02-03 20:53:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2021-02-03 20:53:52 +0000 UTC,LastTransitionTime:2021-02-03 20:53:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 20:53:52.745: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-2415 /apis/apps/v1/namespaces/deployment-2415/replicasets/test-rollover-deployment-574d6dfbff e076523d-8559-4035-9d14-8973761b50e8 6375189 2 2021-02-03 20:53:36 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment f7b60b17-25d8-4e00-9485-ac68d1f8231b 0xc001d77387 0xc001d77388}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d77418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 20:53:52.745: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 3 20:53:52.745: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2415 /apis/apps/v1/namespaces/deployment-2415/replicasets/test-rollover-controller 251e0f56-2874-48f3-ab5b-6e49573bcb43 6375204 2 2021-02-03 20:53:27 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment f7b60b17-25d8-4e00-9485-ac68d1f8231b 0xc001d77267 0xc001d77268}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001d772c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 20:53:52.745: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2415 /apis/apps/v1/namespaces/deployment-2415/replicasets/test-rollover-deployment-f6c94f66c ce39e59b-e70f-4c8b-bac1-5215ae80d8dc 6375128 2 2021-02-03 20:53:34 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment f7b60b17-25d8-4e00-9485-ac68d1f8231b 0xc001d77480 0xc001d77481}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d77508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 20:53:52.748: INFO: Pod "test-rollover-deployment-574d6dfbff-zk26t" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-zk26t test-rollover-deployment-574d6dfbff- deployment-2415 
/api/v1/namespaces/deployment-2415/pods/test-rollover-deployment-574d6dfbff-zk26t 064031fc-9aa6-42e1-9e2c-40c9d9b0d087 6375158 0 2021-02-03 20:53:36 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff e076523d-8559-4035-9d14-8973761b50e8 0xc001d77ca7 0xc001d77ca8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ct52b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ct52b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ct52b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:53:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:53:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:53:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 20:53:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.206,StartTime:2021-02-03 20:53:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 20:53:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a886a15eb53e976675659ed22114e6ed325b1cc7b98b0edd66f97d57dd32d8e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:53:52.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2415" for this suite. • [SLOW TEST:25.803 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":22,"skipped":396,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:53:52.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 3 20:53:53.652: INFO: Waiting up to 5m0s for pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8" in namespace "emptydir-841" to be "success or failure" Feb 3 20:53:53.735: INFO: Pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 82.452139ms Feb 3 20:53:55.739: INFO: Pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086507545s Feb 3 20:53:57.742: INFO: Pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8": Phase="Running", Reason="", readiness=true. Elapsed: 4.089980517s Feb 3 20:53:59.746: INFO: Pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.093824214s STEP: Saw pod success Feb 3 20:53:59.746: INFO: Pod "pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8" satisfied condition "success or failure" Feb 3 20:53:59.748: INFO: Trying to get logs from node jerma-worker2 pod pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8 container test-container: STEP: delete the pod Feb 3 20:53:59.786: INFO: Waiting for pod pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8 to disappear Feb 3 20:53:59.818: INFO: Pod pod-5d6d0cb3-ccf3-4c8a-bdcc-ec9658b25ab8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:53:59.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-841" for this suite. • [SLOW TEST:7.072 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":400,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:53:59.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:54:33.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5324" for this suite. STEP: Destroying namespace "nsdeletetest-3186" for this suite. Feb 3 20:54:33.204: INFO: Namespace nsdeletetest-3186 was already deleted STEP: Destroying namespace "nsdeletetest-747" for this suite. 
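[Editor's note] The Namespaces test above relies on cascading namespace deletion: deleting a Namespace object triggers the namespace controller to remove everything inside it (including the test pod) before the Namespace itself disappears, which is why "Waiting for the namespace to be removed" implies "Verifying there are no pods". A minimal client-go sketch of the same delete-and-wait loop follows; the kubeconfig path mirrors the one in this log, nsName is a placeholder, and it assumes client-go v0.18+ where typed calls take a context.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := "/root/.kube/config" // same path the e2e framework reads
	nsName := "nsdeletetest-example"       // placeholder namespace name

	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Deleting a namespace cascades: the namespace controller removes every
	// object in it before the Namespace object itself goes away.
	if err := client.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll until the Namespace is gone, mirroring the test's
	// "Waiting for the namespace to be removed" step.
	for {
		_, err := client.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("namespace fully removed")
			return
		}
		if err != nil {
			panic(err)
		}
		time.Sleep(2 * time.Second)
	}
}
```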
• [SLOW TEST:33.381 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":24,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:54:33.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0203 20:54:34.435669 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 20:54:34.435: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:54:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3887" for this suite. 
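[Editor's note] The garbage-collector test works because the Deployment's ReplicaSet and Pods carry ownerReferences back to their owner (visible in the ReplicaSet dumps earlier in this log), so deleting the Deployment alone is sufficient. A sketch of the delete call with an explicit propagation policy, under the same client-go v0.18+ assumption; the namespace and deployment name here are hypothetical.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Background propagation returns immediately and lets the garbage
	// collector remove the owned ReplicaSets and Pods afterwards; that lag
	// is why the test briefly observes "expected 0 rs, got 1 rs" before GC
	// catches up.
	policy := metav1.DeletePropagationBackground
	err = client.AppsV1().Deployments("gc-example").Delete(
		context.Background(),
		"simpletest-deployment", // hypothetical deployment name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}
```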
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":25,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:54:34.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-30.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-30.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-30.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-30.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-30.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-30.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 20:54:42.670: INFO: DNS probes using dns-30/dns-test-0142dd53-33ef-4bc8-bc4d-a2ca2d223663 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:54:42.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-30" for this suite. 
• [SLOW TEST:8.303 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":26,"skipped":476,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:54:42.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1862, will wait for the garbage collector to delete the pods Feb 3 20:54:49.429: INFO: Deleting Job.batch foo took: 5.040111ms Feb 3 20:54:49.829: INFO: Terminating Job.batch foo pods took: 400.257455ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:55:32.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1862" for this suite. 
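[Editor's note] "will wait for the garbage collector to delete the pods" is the interesting part of the Job deletion above: the Pods are owned by the Job and are collected after it. One way to get equivalent blocking behavior in a single call is foreground propagation, sketched below under the client-go v0.18+ assumption; this is an illustration, not necessarily the exact mechanism the e2e framework uses.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Foreground propagation keeps the Job object around (with a deletion
	// timestamp) until the garbage collector has removed its Pods, so the
	// Job only disappears once its dependents are gone.
	policy := metav1.DeletePropagationForeground
	if err := client.BatchV1().Jobs("job-1862").Delete(
		context.Background(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}
```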
• [SLOW TEST:49.341 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":27,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:55:32.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2338 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2338 STEP: Creating statefulset with conflicting port in namespace statefulset-2338 STEP: Waiting until pod test-pod starts running in namespace statefulset-2338 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2338 Feb 3 20:55:38.287: INFO: Observed stateful pod in namespace: statefulset-2338, name: ss-0, uid: 7d93e7e1-cdd3-41cc-b488-5686eaff9027, status phase: Failed. Waiting for statefulset controller to delete. Feb 3 20:55:38.306: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2338 STEP: Removing pod with conflicting port in namespace statefulset-2338 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2338 and reaches running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 3 20:55:42.357: INFO: Deleting all statefulsets in ns statefulset-2338 Feb 3 20:55:42.360: INFO: Scaling statefulset ss to 0 Feb 3 20:55:52.404: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 20:55:52.413: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:55:52.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2338" for this suite.
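[Editor's note] The "eviction" in the StatefulSet test above is manufactured: a standalone pod grabs a hostPort on a node, and the StatefulSet pod ss-0, pinned to the same node, requests the same port, fails, and is recreated by the controller until the conflicting pod is removed. A rough sketch of such a StatefulSet follows (printed as JSON rather than applied to a cluster); the port number and labels are made up, and the httpd image is borrowed from elsewhere in this log.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"app": "ss-example"}

	// Minimal StatefulSet comparable to the "ss" set in the test: one
	// replica, a headless governing service, and a pod template whose
	// hostPort can collide with the pre-created "test-pod".
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service created in BeforeEach
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/httpd:2.4.38-alpine",
						Ports: []corev1.ContainerPort{{
							ContainerPort: 80,
							HostPort:      21017, // hypothetical conflicting hostPort: ss-0 fails, controller recreates it
						}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```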
• [SLOW TEST:20.480 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":28,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:55:52.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 20:55:57.756: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:55:57.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5795" for this suite. 
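[Editor's note] The assertion "Expected: &{} to match Container's Termination Message: --" reads oddly but is checking for an empty message: with FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a succeeding container that writes nothing to /dev/termination-log reports an empty one. A sketch of such a pod spec; the name, image, and command are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"}, // succeeds, writes no message
				// With FallbackToLogsOnError, logs become the termination
				// message only on failure; a clean exit that writes nothing
				// to the path below yields an empty message, which is
				// exactly what the test asserts.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```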
• [SLOW TEST:5.191 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":563,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:55:57.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:56:32.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-830" for this suite. 
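[Editor's note] The three containers above apparently encode their restart policy in the name (rpa/rpof/rpn for Always/OnFailure/Never), and the test checks the resulting RestartCount, Phase, Ready condition, and State for each. The helper below is a simplified model of the pod-phase rules being exercised, not the test's own logic.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// expectedPhase mirrors the rule for a container that exits: under Always
// the kubelet keeps restarting it (the pod stays Running), under OnFailure
// it restarts only on non-zero exit, and under Never a zero exit yields
// Succeeded while a non-zero exit yields Failed.
func expectedPhase(policy corev1.RestartPolicy, exitCode int) corev1.PodPhase {
	switch policy {
	case corev1.RestartPolicyAlways:
		return corev1.PodRunning
	case corev1.RestartPolicyOnFailure:
		if exitCode != 0 {
			return corev1.PodRunning // restarted in place until it succeeds
		}
		return corev1.PodSucceeded
	default: // corev1.RestartPolicyNever
		if exitCode != 0 {
			return corev1.PodFailed
		}
		return corev1.PodSucceeded
	}
}

func main() {
	for _, p := range []corev1.RestartPolicy{
		corev1.RestartPolicyAlways,    // the test's terminate-cmd-rpa
		corev1.RestartPolicyOnFailure, // terminate-cmd-rpof
		corev1.RestartPolicyNever,     // terminate-cmd-rpn
	} {
		fmt.Printf("%s: exit 0 -> %s; exit 1 -> %s\n",
			p, expectedPhase(p, 0), expectedPhase(p, 1))
	}
}
```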
• [SLOW TEST:35.121 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:56:32.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:56:37.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8438" for this suite. • [SLOW TEST:5.079 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":31,"skipped":603,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:56:38.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:56:49.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9644" for this suite. • [SLOW TEST:11.143 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":32,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:56:49.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:57:09.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8616" for this suite. 
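[Editor's note] The "sometimes fail ... locally restarted" Job works because RestartPolicy OnFailure makes the kubelet restart the failed container inside the same pod; the node listings just below show the fail-once-local pods with restart count 1. Here is a sketch of a Job built on that trick, using a marker file on an emptyDir volume (which survives container restarts) so each pod fails exactly once; all names, counts, and the busybox image are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	completions := int32(4)
	parallelism := int32(2)

	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure restarts the container in the same pod,
					// which is what bumps RestartCount instead of creating
					// a replacement pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Fail on the first run, succeed on the retry: the
						// emptyDir marker file persists across restarts.
						Command: []string{"sh", "-c",
							"if [ -e /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
```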
• [SLOW TEST:20.079 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":33,"skipped":659,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:57:09.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 20:57:09.314: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 20:57:09.332: INFO: Waiting for terminating namespaces to be deleted... Feb 3 20:57:09.337: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Feb 3 20:57:09.351: INFO: chaos-daemon-f2nl5 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 20:57:09.351: INFO: chaos-controller-manager-7f9bbd476f-2hzrh from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 20:57:09.351: INFO: fail-once-local-82ndd from job-8616 started at 2021-02-03 20:56:58 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container c ready: false, restart count 1 Feb 3 20:57:09.351: INFO: fail-once-local-48snr from job-8616 started at 2021-02-03 20:56:58 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container c ready: false, restart count 1 Feb 3 20:57:09.351: INFO: kindnet-c2jgb from kube-system started at 2021-01-10 17:30:25 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 20:57:09.351: INFO: fail-once-local-kvq55 from job-8616 started at 2021-02-03 20:56:49 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container c ready: false, restart count 1 Feb 3 20:57:09.351: INFO: kube-proxy-gdgm6 from kube-system started at 2021-01-10 17:29:37 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.351: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 20:57:09.351: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Feb 3 20:57:09.367: INFO: kube-proxy-8vfzd from kube-system started at 2021-01-10 17:29:16 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.367: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 20:57:09.367: INFO: chaos-daemon-n2277 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses 
recorded) Feb 3 20:57:09.367: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 20:57:09.367: INFO: kindnet-4ww4f from kube-system started at 2021-01-10 17:29:22 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.367: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 20:57:09.367: INFO: fail-once-local-8qbcf from job-8616 started at 2021-02-03 20:56:49 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.367: INFO: Container c ready: false, restart count 1 Feb 3 20:57:09.367: INFO: rally-9399d102-hqly3dw0 from c-rally-9399d102-toxlp88b started at 2021-02-03 20:56:50 +0000 UTC (1 container statuses recorded) Feb 3 20:57:09.367: INFO: Container rally-9399d102-hqly3dw0 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-344ba1f8-b59e-4024-978f-b60aa6a97b84 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-344ba1f8-b59e-4024-978f-b60aa6a97b84 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-344ba1f8-b59e-4024-978f-b60aa6a97b84 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:57:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5363" for this suite. 
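[Editor's note] The NodeSelector test above applies the random label kubernetes.io/e2e-344ba1f8-b59e-4024-978f-b60aa6a97b84 with value 42 to a node it already proved schedulable, then relaunches the pod with a matching nodeSelector. A sketch of the relaunched pod; the pod name and the pause image are stand-ins for whatever the test actually runs.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// The scheduler only places this pod on a node carrying exactly
			// this label key/value, i.e. the node labeled in the prior step.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-344ba1f8-b59e-4024-978f-b60aa6a97b84": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```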
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.305 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":34,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:57:19.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 3 20:57:24.198: INFO: Successfully updated pod "annotationupdate88d3f500-c787-4d16-a714-e00f401588dc" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:57:28.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8883" for this suite. 
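[Editor's note] The annotation-update test depends on downward API volumes being live: the kubelet rewrites the projected file when pod annotations change, so "Successfully updated pod" is followed by the container observing the new content without a restart. A sketch of the pod shape involved; the mount path, names, annotation, and busybox command are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"}, // later mutated by the test
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Repeatedly print the projected file; the kubelet updates
				// it in place when the pod's annotations change.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```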
• [SLOW TEST:8.724 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":697,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:57:28.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD Feb 3 20:57:28.380: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:57:42.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-960" for this suite.
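[Editor's note] "mark a version not served" flips the Served flag on one CustomResourceDefinitionVersion; the aggregated /openapi/v2 document then drops that version's definitions while the other version's schema stays published. A toy sketch of the mutation follows; real CRDs also need a schema per served version, and the version names here are invented.

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// Two versions of the same CRD: v1 is the storage version, v2 is merely
	// served alongside it.
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v1", Served: true, Storage: true},
		{Name: "v2", Served: true, Storage: false},
	}

	// Simulate the test's "mark a version not served" step: after this, the
	// published OpenAPI spec no longer contains v2's definitions.
	versions[1].Served = false

	for _, v := range versions {
		fmt.Printf("version %s: served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}
```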
• [SLOW TEST:14.654 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":36,"skipped":707,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:57:42.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-n945 STEP: Creating a pod to test atomic-volume-subpath Feb 3 20:57:43.201: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n945" in namespace "subpath-9958" to be "success or failure" Feb 3 20:57:43.216: INFO: Pod "pod-subpath-test-secret-n945": Phase="Pending", Reason="", readiness=false. Elapsed: 15.16257ms Feb 3 20:57:45.220: INFO: Pod "pod-subpath-test-secret-n945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019099877s Feb 3 20:57:47.223: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 4.022192391s Feb 3 20:57:49.227: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 6.026548086s Feb 3 20:57:51.231: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 8.030397751s Feb 3 20:57:53.235: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 10.034507384s Feb 3 20:57:55.239: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 12.03825802s Feb 3 20:57:57.243: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 14.04246898s Feb 3 20:57:59.247: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 16.046521146s Feb 3 20:58:01.251: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 18.05026368s Feb 3 20:58:03.254: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 20.05362712s Feb 3 20:58:05.271: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.069950597s Feb 3 20:58:07.274: INFO: Pod "pod-subpath-test-secret-n945": Phase="Running", Reason="", readiness=true. Elapsed: 24.073447893s Feb 3 20:58:09.305: INFO: Pod "pod-subpath-test-secret-n945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.104533187s STEP: Saw pod success Feb 3 20:58:09.305: INFO: Pod "pod-subpath-test-secret-n945" satisfied condition "success or failure" Feb 3 20:58:09.309: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-n945 container test-container-subpath-secret-n945: STEP: delete the pod Feb 3 20:58:09.330: INFO: Waiting for pod pod-subpath-test-secret-n945 to disappear Feb 3 20:58:09.334: INFO: Pod pod-subpath-test-secret-n945 no longer exists STEP: Deleting pod pod-subpath-test-secret-n945 Feb 3 20:58:09.334: INFO: Deleting pod "pod-subpath-test-secret-n945" in namespace "subpath-9958" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:58:09.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9958" for this suite. • [SLOW TEST:26.422 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":37,"skipped":724,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:58:09.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-60fd658c-f78f-48e8-a341-f6bbd92dc805 STEP: Creating a pod to test consume secrets Feb 3 20:58:09.451: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d" in namespace "projected-4289" to be "success or failure" Feb 3 20:58:09.454: INFO: Pod "pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.330066ms Feb 3 20:58:11.516: INFO: Pod "pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065766091s Feb 3 20:58:13.521: INFO: Pod "pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069956179s STEP: Saw pod success Feb 3 20:58:13.521: INFO: Pod "pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d" satisfied condition "success or failure" Feb 3 20:58:13.524: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d container projected-secret-volume-test: STEP: delete the pod Feb 3 20:58:13.566: INFO: Waiting for pod pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d to disappear Feb 3 20:58:13.587: INFO: Pod pod-projected-secrets-99ffd3be-07fe-43fe-bf28-d9060adc7f2d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:58:13.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4289" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:58:13.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
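[Editor's note] The pod created in the [It] step that follows attaches a postStart httpGet hook pointed at the handler pod created above; the kubelet fires the hook right after the container starts, and the handler's received request is what "check poststart hook" inspects. A sketch of the hook wiring; the host IP, port, path, and image are placeholders, and it assumes a recent k8s.io/api where the handler type is named LifecycleHandler (older releases call it Handler).

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// Fired by the kubelet immediately after the container
					// starts; the target is the handler pod's address.
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.244.2.1", // handler pod IP (hypothetical)
							Port: intstr.FromInt(8080),
							Path: "/echo?msg=poststart",
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```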
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 3 20:58:21.725: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:21.762: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 20:58:23.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:23.767: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 20:58:25.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:25.767: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 20:58:27.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:27.767: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 20:58:29.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:29.767: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 20:58:31.762: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 20:58:31.767: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 20:58:31.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2148" for this suite. • [SLOW TEST:18.179 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":746,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 20:58:31.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 20:58:31.821: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 20:58:31.843: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 20:58:31.845: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Feb 3 20:58:31.851: INFO: kube-proxy-gdgm6 from kube-system started at 2021-01-10 17:29:37 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.851: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 20:58:31.851: INFO: chaos-daemon-f2nl5 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.851: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 20:58:31.851: INFO: chaos-controller-manager-7f9bbd476f-2hzrh from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.851: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 20:58:31.851: INFO: kindnet-c2jgb from kube-system started at 2021-01-10 17:30:25 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.851: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 20:58:31.851: INFO: pod-handle-http-request from container-lifecycle-hook-2148 started at 2021-02-03 20:58:13 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.851: INFO: Container pod-handle-http-request ready: true, restart count 0 Feb 3 20:58:31.851: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Feb 3 20:58:31.856: INFO: kube-proxy-8vfzd from kube-system started at 2021-01-10 17:29:16 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.856: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 20:58:31.856: INFO: rally-0c77b6d1-ifzn003y from c-rally-0c77b6d1-97ok2izd started at 2021-02-03 20:58:27 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.856: INFO: Container rally-0c77b6d1-ifzn003y ready: true, restart count 0 Feb 3 20:58:31.856: INFO: chaos-daemon-n2277 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.856: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 20:58:31.856: INFO: kindnet-4ww4f from kube-system started at 2021-01-10 17:29:22 +0000 UTC (1 container statuses recorded) Feb 3 20:58:31.856: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-51a17b71-8450-4134-b513-5bde393d9994 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-51a17b71-8450-4134-b513-5bde393d9994 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-51a17b71-8450-4134-b513-5bde393d9994 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:03:40.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8220" for this suite.
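Note on the hostPort conflict above: the scheduler treats hostIP 0.0.0.0 as covering every node address, so it collides with 127.0.0.1 for the same port and protocol, and pod5 stays unschedulable on pod4's node. A minimal Go sketch of the two conflicting port declarations (assuming the k8s.io/api module; container port values are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // pod4's declaration: hostIP 0.0.0.0 claims port 54322 on all node addresses.
        pod4Port := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322,
            Protocol: corev1.ProtocolTCP, HostIP: "0.0.0.0"}
        // pod5's declaration: same port/protocol on 127.0.0.1, which 0.0.0.0
        // subsumes, so the scheduler reports a hostPort conflict.
        pod5Port := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322,
            Protocol: corev1.ProtocolTCP, HostIP: "127.0.0.1"}
        out, _ := json.MarshalIndent([]corev1.ContainerPort{pod4Port, pod5Port}, "", "  ")
        fmt.Println(string(out))
    }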
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.287 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":40,"skipped":768,"failed":0} [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:03:40.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 3 21:03:44.222: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 3 21:03:54.390: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:03:54.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3334" for this suite. 
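The Delete Grace Period spec above deletes the pod with an explicit grace period and then polls until the kubelet observes the termination notice. A minimal client-go sketch of that delete call (assuming a recent client-go where Delete takes a context; namespace, pod name, and grace value are illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig path the suite logs (>>> kubeConfig).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Give the kubelet 30 seconds to stop containers before forcible removal.
        grace := int64(30)
        err = client.CoreV1().Pods("default").Delete(context.TODO(), "example-pod",
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
        fmt.Println("delete issued, err =", err)
    }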
• [SLOW TEST:14.339 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":41,"skipped":768,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:03:54.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:03:54.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4773" for this suite. 
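The discovery-document spec above is a walk of /apis, then /apis/apiextensions.k8s.io, then /apis/apiextensions.k8s.io/v1. A minimal sketch of the same walk with client-go's discovery client (assuming the k8s.io/client-go module; the kubeconfig path mirrors the log):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        disco := kubernetes.NewForConfigOrDie(cfg).Discovery()

        // /apis: locate the apiextensions.k8s.io group and its preferred version.
        groups, err := disco.ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            if g.Name == "apiextensions.k8s.io" {
                fmt.Println("preferred:", g.PreferredVersion.GroupVersion)
            }
        }

        // /apis/apiextensions.k8s.io/v1: confirm customresourcedefinitions is served.
        rl, err := disco.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
        if err != nil {
            panic(err)
        }
        for _, r := range rl.APIResources {
            if r.Name == "customresourcedefinitions" {
                fmt.Println("resource:", r.Name, "kind:", r.Kind)
            }
        }
    }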
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":42,"skipped":772,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:03:54.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0203 21:04:06.440008 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 21:04:06.440: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:06.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-840" for this suite. 
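The garbage-collector spec above hinges on pods carrying two ownerReferences: deleting one owner leaves the dependent alive as long as the other owner remains. A sketch of such a dual-owner ObjectMeta (assuming the k8s.io/api and k8s.io/apimachinery modules; UIDs are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The GC deletes a dependent only after every listed owner is gone,
        // so this pod survives the deletion of simpletest-rc-to-be-deleted.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "simpletest-pod",
                OwnerReferences: []metav1.OwnerReference{
                    {APIVersion: "v1", Kind: "ReplicationController",
                        Name: "simpletest-rc-to-be-deleted", UID: "uid-to-be-deleted"},
                    {APIVersion: "v1", Kind: "ReplicationController",
                        Name: "simpletest-rc-to-stay", UID: "uid-to-stay"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod.OwnerReferences, "", "  ")
        fmt.Println(string(out))
    }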
• [SLOW TEST:11.926 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":43,"skipped":774,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:06.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 3 21:04:08.164: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 3 21:04:10.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983048, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983048, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983048, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983048, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:04:13.212: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:04:13.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:14.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-96" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.468 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":44,"skipped":776,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:14.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[] Feb 3 21:04:15.445: INFO: Get endpoints failed (3.177536ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 3 21:04:16.449: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[] (1.006613573s elapsed) STEP: Creating pod pod1 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod1:[100]] Feb 3 21:04:19.637: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod1:[100]] (3.181747054s elapsed) STEP: Creating pod pod2 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod1:[100] pod2:[101]] Feb 3 21:04:23.738: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod1:[100] pod2:[101]] (4.097884543s elapsed) STEP: Deleting pod pod1 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[pod2:[101]] Feb 3 21:04:24.864: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[pod2:[101]] (1.121658764s elapsed) STEP: 
Deleting pod pod2 in namespace services-920 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-920 to expose endpoints map[] Feb 3 21:04:25.882: INFO: successfully validated that service multi-endpoint-test in namespace services-920 exposes endpoints map[] (1.014155712s elapsed) [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:25.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-920" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.052 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":45,"skipped":783,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:25.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-d06a2f46-5659-4c20-822e-10b2a813d1e1 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d06a2f46-5659-4c20-822e-10b2a813d1e1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:34.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7946" for this suite. 
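The ConfigMap update spec above mounts the ConfigMap as a volume and waits for the kubelet to re-project the files after the API object changes (this happens on the kubelet's sync period, hence the polling). A sketch of the volume wiring (assuming the k8s.io/api module; names and the probe command are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-volume-pod"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "cm",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "configmap-test-upd"},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "viewer",
                    Image: "busybox",
                    // Repeatedly print the projected file so an update is observable.
                    Command:      []string{"sh", "-c", "while true; do cat /etc/cm/*; sleep 1; done"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod.Spec, "", "  ")
        fmt.Println(string(out))
    }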
• [SLOW TEST:8.162 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:34.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e09a05ed-3726-4a0c-8c1b-a06954b76f6c STEP: Creating a pod to test consume secrets Feb 3 21:04:34.221: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b" in namespace "projected-4875" to be "success or failure" Feb 3 21:04:34.224: INFO: Pod "pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05108ms Feb 3 21:04:36.228: INFO: Pod "pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006900206s Feb 3 21:04:38.231: INFO: Pod "pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010828081s STEP: Saw pod success Feb 3 21:04:38.231: INFO: Pod "pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b" satisfied condition "success or failure" Feb 3 21:04:38.234: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b container projected-secret-volume-test: STEP: delete the pod Feb 3 21:04:38.366: INFO: Waiting for pod pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b to disappear Feb 3 21:04:38.440: INFO: Pod pod-projected-secrets-ea1bffc7-ded9-40ad-a69f-d05f09be980b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4875" for this suite. 
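The "mappings and Item Mode set" spec above remaps a secret key to a new path and pins the file mode. A sketch of the corresponding projected-volume source (assuming the k8s.io/api module; key, path, and mode values are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // the file mode the test then reads back from the mount
        vol := corev1.Volume{
            Name: "projected-secret",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-map"},
                            // Remap the secret key to a new file name with an explicit mode.
                            Items: []corev1.KeyToPath{{
                                Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }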
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:38.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Feb 3 21:04:38.544: INFO: Waiting up to 5m0s for pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5" in namespace "var-expansion-4411" to be "success or failure" Feb 3 21:04:38.549: INFO: Pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911311ms Feb 3 21:04:40.553: INFO: Pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008461663s Feb 3 21:04:42.557: INFO: Pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5": Phase="Running", Reason="", readiness=true. Elapsed: 4.012174366s Feb 3 21:04:44.560: INFO: Pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015332123s STEP: Saw pod success Feb 3 21:04:44.560: INFO: Pod "var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5" satisfied condition "success or failure" Feb 3 21:04:44.562: INFO: Trying to get logs from node jerma-worker pod var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5 container dapi-container: STEP: delete the pod Feb 3 21:04:44.595: INFO: Waiting for pod var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5 to disappear Feb 3 21:04:44.609: INFO: Pod var-expansion-9330f136-a19a-409b-b55f-4ff2370140d5 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:44.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4411" for this suite. 
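The Variable Expansion spec above composes env vars with $(VAR) references, which the kubelet expands from earlier entries in the same list. A sketch of such a composition (assuming the k8s.io/api module; values are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // FOOBAR resolves to "foo-value;;bar-value" inside the container.
        env := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "bar-value"},
            {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }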
• [SLOW TEST:6.169 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":830,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:44.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f Feb 3 21:04:44.723: INFO: Pod name my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f: Found 0 pods out of 1 Feb 3 21:04:49.765: INFO: Pod name my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f: Found 1 pods out of 1 Feb 3 21:04:49.765: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f" are running Feb 3 21:04:49.775: INFO: Pod "my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f-tzq4s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:04:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:04:47 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:04:47 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:04:44 +0000 UTC Reason: Message:}]) Feb 3 21:04:49.775: INFO: Trying to dial the pod Feb 3 21:04:54.786: INFO: Controller my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f: Got expected result from replica 1 [my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f-tzq4s]: "my-hostname-basic-17ef240a-8639-4435-a0a6-48f1faa24d3f-tzq4s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:04:54.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6636" for this suite. 
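The ReplicationController spec above brings up one replica of a public image that answers with its own hostname, then dials the pod and checks the reply. A sketch of such an RC (assuming the k8s.io/api module; the agnhost serve-hostname image matches what the suite uses elsewhere in this log, other values are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        name := "my-hostname-basic"
        replicas := int32(1)
        rc := corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": name},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  name,
                        Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                        // serve-hostname replies to HTTP requests with the pod's hostname.
                        Args: []string{"serve-hostname"},
                    }}},
                },
            },
        }
        out, _ := json.MarshalIndent(rc.Spec, "", "  ")
        fmt.Println(string(out))
    }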
• [SLOW TEST:10.176 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":49,"skipped":846,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:04:54.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 3 21:05:02.683: INFO: 10 pods remaining Feb 3 21:05:02.683: INFO: 10 pods have nil DeletionTimestamp Feb 3 21:05:02.683: INFO: Feb 3 21:05:03.976: INFO: 6 pods remaining Feb 3 21:05:03.976: INFO: 0 pods have nil DeletionTimestamp Feb 3 21:05:03.976: INFO: Feb 3 21:05:04.912: INFO: 0 pods remaining Feb 3 21:05:04.912: INFO: 0 pods have nil DeletionTimestamp Feb 3 21:05:04.912: INFO: STEP: Gathering metrics W0203 21:05:06.252191 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 21:05:06.252: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2130" for this suite.
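The "keep the rc around" behavior above is foreground cascading deletion: the owner gets a deletionTimestamp plus the foregroundDeletion finalizer and is only removed once its dependents are gone, which is why the log counts pods down from 10 to 0 first. A sketch of the delete options that request it (assuming the k8s.io/apimachinery module):

    package main

    import (
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Foreground propagation blocks owner removal on dependent deletion.
        policy := metav1.DeletePropagationForeground
        opts := metav1.DeleteOptions{PropagationPolicy: &policy}
        out, _ := json.MarshalIndent(opts, "", "  ")
        fmt.Println(string(out))
    }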
• [SLOW TEST:11.465 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":50,"skipped":854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:06.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:05:07.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d" in namespace "projected-9475" to be "success or failure" Feb 3 21:05:07.748: INFO: Pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 409.859745ms Feb 3 21:05:09.752: INFO: Pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413536463s Feb 3 21:05:11.872: INFO: Pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533855444s Feb 3 21:05:13.877: INFO: Pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.538369678s STEP: Saw pod success Feb 3 21:05:13.877: INFO: Pod "downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d" satisfied condition "success or failure" Feb 3 21:05:13.880: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d container client-container: STEP: delete the pod Feb 3 21:05:13.903: INFO: Waiting for pod downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d to disappear Feb 3 21:05:13.907: INFO: Pod downwardapi-volume-3c8bc1e6-dd9a-476a-9b51-576fee0e8a2d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:13.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9475" for this suite. 
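The Projected downwardAPI spec above exposes the container's effective memory limit through a volume file; with no limit set on the container, the reported value falls back to the node's allocatable memory, which is what the test asserts. A sketch of the downward API item involved (assuming the k8s.io/api module; the container name mirrors the log):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        // File content: the container's limits.memory, defaulted to
                        // node allocatable when the container sets no limit.
                        Path: "memory_limit",
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.memory",
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }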
• [SLOW TEST:7.690 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":926,"failed":0} [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:13.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 3 21:05:14.041: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379520 0 2021-02-03 21:05:14 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 3 21:05:14.041: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379521 0 2021-02-03 21:05:14 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 3 21:05:14.041: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379522 0 2021-02-03 21:05:14 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 3 21:05:24.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379560 0 2021-02-03 21:05:14 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 3 21:05:24.105: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379561 0 2021-02-03 21:05:14 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 3 21:05:24.105: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8466 /api/v1/namespaces/watch-8466/configmaps/e2e-watch-test-label-changed db71bebe-ed6c-4dfd-9510-2e8dd03776a1 6379563 0 2021-02-03 21:05:14 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:24.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8466" for this suite. • [SLOW TEST:10.167 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":52,"skipped":926,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:24.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 3 21:05:29.323: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:29.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6745" for this suite. 
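The ReplicaSet adopt/release spec above turns on the controller's selector matching: an orphan pod whose labels match is adopted (gains an ownerReference), and flipping the label releases it. A small sketch of that matching logic with the apimachinery selector helpers (assuming the k8s.io/apimachinery module; label values are illustrative):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        sel, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
            MatchLabels: map[string]string{"name": "pod-adoption-release"},
        })
        if err != nil {
            panic(err)
        }
        // Matching labels: the ReplicaSet adopts the orphan pod.
        fmt.Println("adopt:", sel.Matches(labels.Set{"name": "pod-adoption-release"}))
        // Changed label: the pod no longer matches and is released.
        fmt.Println("release:", sel.Matches(labels.Set{"name": "pod-released"}))
    }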
• [SLOW TEST:5.296 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":53,"skipped":931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:29.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-fb95ca29-cf0f-4116-9476-b3276008ac64 STEP: Creating secret with name secret-projected-all-test-volume-fda4b603-3128-43a9-9ce8-7746948de781 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 3 21:05:29.535: INFO: Waiting up to 5m0s for pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228" in namespace "projected-6505" to be "success or failure" Feb 3 21:05:29.563: INFO: Pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228": Phase="Pending", Reason="", readiness=false. Elapsed: 28.011594ms Feb 3 21:05:31.567: INFO: Pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031784569s Feb 3 21:05:33.571: INFO: Pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228": Phase="Running", Reason="", readiness=true. Elapsed: 4.035305494s Feb 3 21:05:35.638: INFO: Pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102231975s STEP: Saw pod success Feb 3 21:05:35.638: INFO: Pod "projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228" satisfied condition "success or failure" Feb 3 21:05:35.641: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228 container projected-all-volume-test: STEP: delete the pod Feb 3 21:05:35.694: INFO: Waiting for pod projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228 to disappear Feb 3 21:05:35.724: INFO: Pod projected-volume-5d29aaed-feb8-4682-998b-ada78d27c228 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:35.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6505" for this suite. 
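The Projected combined spec above mounts one volume whose sources span a ConfigMap, a Secret, and the downward API. A sketch of such an all-in-one projected volume (assuming the k8s.io/api module; object names are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "all-in-one",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "configmap-projected-all-test-volume"}}},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "secret-projected-all-test-volume"}}},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                FieldRef: &corev1.ObjectFieldSelector{
                                    APIVersion: "v1", FieldPath: "metadata.name"}}}}},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }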
• [SLOW TEST:6.318 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":54,"skipped":961,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:35.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:52.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9236" for this suite. • [SLOW TEST:17.208 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":55,"skipped":963,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:52.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:05:52.995: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 3 21:05:57.999: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 21:05:57.999: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 3 21:05:58.020: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4408 /apis/apps/v1/namespaces/deployment-4408/deployments/test-cleanup-deployment 3c2c3c8d-aeb2-45cf-b715-d6dcf972b1b4 6379771 1 2021-02-03 21:05:58 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005311508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 3 21:05:58.074: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4408 /apis/apps/v1/namespaces/deployment-4408/replicasets/test-cleanup-deployment-55ffc6b7b6 756d3e96-f8a4-4a36-8b63-612a38688d20 6379773 1 2021-02-03 21:05:58 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3c2c3c8d-aeb2-45cf-b715-d6dcf972b1b4 0xc002d45207 0xc002d45208}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002d45278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:05:58.074: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 3 21:05:58.074: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4408 /apis/apps/v1/namespaces/deployment-4408/replicasets/test-cleanup-controller 24702a0d-f909-45c1-baeb-4f9ff19d169c 6379772 1 2021-02-03 21:05:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 3c2c3c8d-aeb2-45cf-b715-d6dcf972b1b4 0xc002d45137 0xc002d45138}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d45198 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 21:05:58.078: INFO: Pod "test-cleanup-controller-tq8lq" is available: &Pod{ObjectMeta:{test-cleanup-controller-tq8lq test-cleanup-controller- deployment-4408 /api/v1/namespaces/deployment-4408/pods/test-cleanup-controller-tq8lq 0ab576c9-33bb-43ac-bbbf-6691895a9c9b 6379761 0 2021-02-03 21:05:52 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 24702a0d-f909-45c1-baeb-4f9ff19d169c 0xc0005036d7 0xc0005036d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n8d6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n8d6h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n8d6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:05:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:05:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:05:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.220,StartTime:2021-02-03 21:05:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 21:05:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dd23985c14916d848a1c55ae9fa947650d478a49fe5c77b5c81e3fbc0fec239e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 3 21:05:58.078: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-b7nb8" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-b7nb8 test-cleanup-deployment-55ffc6b7b6- deployment-4408 /api/v1/namespaces/deployment-4408/pods/test-cleanup-deployment-55ffc6b7b6-b7nb8 8d5aee72-98f0-434b-b622-1a2a28d6292a 6379776 0 2021-02-03 21:05:58 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 756d3e96-f8a4-4a36-8b63-612a38688d20 0xc000503977 0xc000503978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n8d6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n8d6h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n8d6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:05:58.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4408" for this suite. • [SLOW TEST:5.270 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":56,"skipped":969,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:05:58.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c98cae55-dbce-44a7-af61-f6e6a367101b STEP: Creating a pod to test consume configMaps Feb 3 21:05:58.331: INFO: Waiting up to 5m0s for pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9" in namespace "configmap-8162" to be "success or failure" Feb 3 21:05:58.342: INFO: Pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.836641ms Feb 3 21:06:00.346: INFO: Pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01488335s Feb 3 21:06:02.421: INFO: Pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089937958s Feb 3 21:06:04.449: INFO: Pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117798663s STEP: Saw pod success Feb 3 21:06:04.449: INFO: Pod "pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9" satisfied condition "success or failure" Feb 3 21:06:04.452: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9 container configmap-volume-test: STEP: delete the pod Feb 3 21:06:04.473: INFO: Waiting for pod pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9 to disappear Feb 3 21:06:04.506: INFO: Pod pod-configmaps-043c64b6-23cc-4820-a2e8-ba15adc905e9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:04.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8162" for this suite. • [SLOW TEST:6.303 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:04.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:06:05.002: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b2f1fbfe-6c07-46f5-a811-b06a8ab5012e", Controller:(*bool)(0xc002cac8d2), BlockOwnerDeletion:(*bool)(0xc002cac8d3)}} Feb 3 21:06:05.054: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"479554d6-9e91-4f28-8499-ad9ef09a63a9", Controller:(*bool)(0xc003c8443a), BlockOwnerDeletion:(*bool)(0xc003c8443b)}} Feb 3 21:06:05.066: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"78c42871-86b5-411d-bd48-c2855854d763", Controller:(*bool)(0xc002e043ca), BlockOwnerDeletion:(*bool)(0xc002e043cb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:10.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7930" for this suite. 
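------------------------------
The dependency-circle check above wires three pods into an ownership loop (pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2) and then verifies that the garbage collector is not deadlocked by it. For reference, a minimal client-go sketch of that setup follows. It is a sketch only: it assumes a throwaway "default" namespace, reuses the agnhost image seen in this run, uses the context-free Create/Update signatures of the 1.17-era client matching this suite, and picks Controller/BlockOwnerDeletion values for illustration (the log prints only pointer addresses for them).

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func truePtr() *bool { b := true; return &b }

// minimalPod returns a one-container pod; the image is borrowed from this run.
func minimalPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "main",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
		}}},
	}
}

// ownedBy mirrors the OwnerReferences printed above: one pod names another
// pod as its controller, with BlockOwnerDeletion set.
func ownedBy(owner *corev1.Pod) []metav1.OwnerReference {
	return []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         truePtr(),
		BlockOwnerDeletion: truePtr(),
	}}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(cfg).CoreV1().Pods("default")

	pod3, err := pods.Create(minimalPod("pod3")) // created first so its UID exists
	if err != nil {
		panic(err)
	}
	p1 := minimalPod("pod1")
	p1.OwnerReferences = ownedBy(pod3) // pod1 is owned by pod3
	pod1, err := pods.Create(p1)
	if err != nil {
		panic(err)
	}
	p2 := minimalPod("pod2")
	p2.OwnerReferences = ownedBy(pod1) // pod2 is owned by pod1
	pod2, err := pods.Create(p2)
	if err != nil {
		panic(err)
	}
	pod3.OwnerReferences = ownedBy(pod2) // pod3 is owned by pod2: the circle closes
	if _, err := pods.Update(pod3); err != nil {
		panic(err)
	}
}
------------------------------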
• [SLOW TEST:5.599 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":58,"skipped":1001,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:10.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b25e5834-4887-4907-9485-bbf7deac9804 STEP: Creating a pod to test consume configMaps Feb 3 21:06:10.213: INFO: Waiting up to 5m0s for pod "pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1" in namespace "configmap-2171" to be "success or failure" Feb 3 21:06:10.218: INFO: Pod "pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.50678ms Feb 3 21:06:12.282: INFO: Pod "pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068838768s Feb 3 21:06:14.286: INFO: Pod "pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073506778s STEP: Saw pod success Feb 3 21:06:14.286: INFO: Pod "pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1" satisfied condition "success or failure" Feb 3 21:06:14.289: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1 container configmap-volume-test: STEP: delete the pod Feb 3 21:06:14.323: INFO: Waiting for pod pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1 to disappear Feb 3 21:06:14.330: INFO: Pod pod-configmaps-5187d50b-1ba6-4a9c-82a0-f23ed02136b1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:14.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2171" for this suite. 
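------------------------------
The "mappings and Item mode set" variant that just ran differs from the plain ConfigMap-volume test in two ways: a ConfigMap key is remapped to a nested path inside the mount, and a per-item file mode is applied. A sketch of just that volume stanza follows; the key name, target path, and 0400 mode are illustrative, since the exact values are not visible in this log (the ConfigMap name is the one created above).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func modePtr(m int32) *int32 { return &m }

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map-b25e5834-4887-4907-9485-bbf7deac9804",
				},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",         // key inside the ConfigMap (illustrative)
					Path: "path/to/data-2", // file created under the mount point
					Mode: modePtr(0400),    // per-item mode; overrides DefaultMode
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------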
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1017,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:14.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Feb 3 21:06:14.412: INFO: Waiting up to 5m0s for pod "client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120" in namespace "containers-6347" to be "success or failure" Feb 3 21:06:14.420: INFO: Pod "client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120": Phase="Pending", Reason="", readiness=false. Elapsed: 7.537447ms Feb 3 21:06:16.423: INFO: Pod "client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0112008s Feb 3 21:06:18.428: INFO: Pod "client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015731066s STEP: Saw pod success Feb 3 21:06:18.428: INFO: Pod "client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120" satisfied condition "success or failure" Feb 3 21:06:18.431: INFO: Trying to get logs from node jerma-worker2 pod client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120 container test-container: STEP: delete the pod Feb 3 21:06:18.449: INFO: Waiting for pod client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120 to disappear Feb 3 21:06:18.454: INFO: Pod client-containers-ba85d926-cd08-4dd2-a9b2-6275eb296120 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:18.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6347" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1017,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:18.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 3 21:06:18.557: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:18.634: INFO: Number of nodes with available pods: 0 Feb 3 21:06:18.634: INFO: Node jerma-worker is running more than one daemon pod Feb 3 21:06:19.749: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:19.753: INFO: Number of nodes with available pods: 0 Feb 3 21:06:19.753: INFO: Node jerma-worker is running more than one daemon pod Feb 3 21:06:20.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:20.643: INFO: Number of nodes with available pods: 0 Feb 3 21:06:20.643: INFO: Node jerma-worker is running more than one daemon pod Feb 3 21:06:21.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:21.644: INFO: Number of nodes with available pods: 0 Feb 3 21:06:21.644: INFO: Node jerma-worker is running more than one daemon pod Feb 3 21:06:22.645: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:22.648: INFO: Number of nodes with available pods: 2 Feb 3 21:06:22.648: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Feb 3 21:06:22.670: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:22.673: INFO: Number of nodes with available pods: 1 Feb 3 21:06:22.673: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:23.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:23.680: INFO: Number of nodes with available pods: 1 Feb 3 21:06:23.680: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:24.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:24.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:24.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:25.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:25.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:25.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:26.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:26.680: INFO: Number of nodes with available pods: 1 Feb 3 21:06:26.680: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:27.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:27.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:27.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:28.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:28.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:28.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:29.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:29.682: INFO: Number of nodes with available pods: 1 Feb 3 21:06:29.682: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:30.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:30.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:30.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:31.696: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:31.700: INFO: Number of nodes with available pods: 1 Feb 3 21:06:31.700: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:32.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Feb 3 21:06:32.681: INFO: Number of nodes with available pods: 1 Feb 3 21:06:32.681: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:33.965: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:33.968: INFO: Number of nodes with available pods: 1 Feb 3 21:06:33.968: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:34.707: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:34.710: INFO: Number of nodes with available pods: 1 Feb 3 21:06:34.710: INFO: Node jerma-worker2 is running more than one daemon pod Feb 3 21:06:35.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 21:06:35.682: INFO: Number of nodes with available pods: 2 Feb 3 21:06:35.682: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6759, will wait for the garbage collector to delete the pods Feb 3 21:06:35.742: INFO: Deleting DaemonSet.extensions daemon-set took: 5.250905ms Feb 3 21:06:36.142: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.330065ms Feb 3 21:06:42.162: INFO: Number of nodes with available pods: 0 Feb 3 21:06:42.162: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 21:06:42.165: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6759/daemonsets","resourceVersion":"6380118"},"items":null} Feb 3 21:06:42.167: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6759/pods","resourceVersion":"6380118"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:42.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6759" for this suite. 
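------------------------------
What drives the repeated "skip checking this node" lines above is scheduling, not the DaemonSet itself: the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the control-plane node is skipped and only the two workers count toward "Number of running nodes: 2". A sketch of a comparably minimal DaemonSet follows; the label key is an assumption, and the image is the httpd one used elsewhere in this run.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// Selector must match the template labels or the API rejects it.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				// No toleration for node-role.kubernetes.io/master is added,
				// so tainted control-plane nodes receive no daemon pod.
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	fmt.Printf("%+v\n", ds.Spec.Selector)
}
------------------------------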
• [SLOW TEST:23.709 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":61,"skipped":1026,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:42.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:06:42.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610" in namespace "downward-api-4357" to be "success or failure" Feb 3 21:06:42.259: INFO: Pod "downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680538ms Feb 3 21:06:44.263: INFO: Pod "downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007518199s Feb 3 21:06:46.267: INFO: Pod "downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0116486s STEP: Saw pod success Feb 3 21:06:46.267: INFO: Pod "downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610" satisfied condition "success or failure" Feb 3 21:06:46.270: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610 container client-container: STEP: delete the pod Feb 3 21:06:46.288: INFO: Waiting for pod downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610 to disappear Feb 3 21:06:46.330: INFO: Pod downwardapi-volume-c66980cd-acd9-4bfc-b680-2ccb271e9610 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:46.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4357" for this suite. 
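------------------------------
The memory-limit variant that just passed works because the pod under test declares no memory limit of its own; when the downward API file for limits.memory is resolved, the kubelet substitutes the node's allocatable memory as the effective default. A sketch of the volume stanza involved follows; the file path is an assumption, and the container name is the client-container seen in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					// With no limit set on the container, this file reports
					// the node's allocatable memory instead.
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------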
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:46.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:06:46.388: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 3 21:06:48.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 create -f -' Feb 3 21:06:52.709: INFO: stderr: "" Feb 3 21:06:52.709: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 3 21:06:52.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 delete e2e-test-crd-publish-openapi-4924-crds test-foo' Feb 3 21:06:52.852: INFO: stderr: "" Feb 3 21:06:52.852: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 3 21:06:52.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 apply -f -' Feb 3 21:06:53.092: INFO: stderr: "" Feb 3 21:06:53.092: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 3 21:06:53.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 delete e2e-test-crd-publish-openapi-4924-crds test-foo' Feb 3 21:06:53.220: INFO: stderr: "" Feb 3 21:06:53.220: INFO: stdout: "e2e-test-crd-publish-openapi-4924-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 3 21:06:53.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 create -f -' Feb 3 21:06:53.494: INFO: rc: 1 Feb 3 21:06:53.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 apply -f -' Feb 3 21:06:53.722: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 3 21:06:53.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 create -f -' Feb 3 21:06:54.010: INFO: rc: 1 Feb 3 21:06:54.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5047 apply -f -' Feb 3 21:06:54.250: INFO: rc: 1 
STEP: kubectl explain works to explain CR properties Feb 3 21:06:54.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds' Feb 3 21:06:54.515: INFO: stderr: "" Feb 3 21:06:54.515: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4924-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 3 21:06:54.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds.metadata' Feb 3 21:06:54.781: INFO: stderr: "" Feb 3 21:06:54.781: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4924-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. 
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 3 21:06:54.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds.spec' Feb 3 21:06:55.022: INFO: stderr: "" Feb 3 21:06:55.022: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4924-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 3 21:06:55.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds.spec.bars' Feb 3 21:06:55.277: INFO: stderr: "" Feb 3 21:06:55.277: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4924-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 3 21:06:55.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4924-crds.spec.bars2' Feb 3 21:06:55.514: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:06:58.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5047" for this suite. • [SLOW TEST:12.099 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":63,"skipped":1101,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:06:58.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 3 21:06:58.523: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 3 21:07:10.100: INFO: >>> kubeConfig: /root/.kube/config Feb 3 
21:07:13.037: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:07:23.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4724" for this suite. • [SLOW TEST:25.227 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":64,"skipped":1121,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:07:23.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 3 21:07:24.433: INFO: Pod name wrapped-volume-race-8d56da21-b753-4304-aea6-981965f15da2: Found 0 pods out of 5 Feb 3 21:07:29.442: INFO: Pod name wrapped-volume-race-8d56da21-b753-4304-aea6-981965f15da2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8d56da21-b753-4304-aea6-981965f15da2 in namespace emptydir-wrapper-861, will wait for the garbage collector to delete the pods Feb 3 21:07:45.525: INFO: Deleting ReplicationController wrapped-volume-race-8d56da21-b753-4304-aea6-981965f15da2 took: 9.256177ms Feb 3 21:07:45.926: INFO: Terminating ReplicationController wrapped-volume-race-8d56da21-b753-4304-aea6-981965f15da2 pods took: 400.295375ms STEP: Creating RC which spawns configmap-volume pods Feb 3 21:08:02.603: INFO: Pod name wrapped-volume-race-9bfc9826-de96-45bc-a6bf-70f25c8d03c7: Found 0 pods out of 5 Feb 3 21:08:07.609: INFO: Pod name wrapped-volume-race-9bfc9826-de96-45bc-a6bf-70f25c8d03c7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9bfc9826-de96-45bc-a6bf-70f25c8d03c7 in namespace emptydir-wrapper-861, will wait for the garbage collector to delete the pods Feb 3 21:08:23.718: INFO: Deleting ReplicationController wrapped-volume-race-9bfc9826-de96-45bc-a6bf-70f25c8d03c7 took: 7.284885ms Feb 3 21:08:24.118: INFO: Terminating ReplicationController wrapped-volume-race-9bfc9826-de96-45bc-a6bf-70f25c8d03c7 pods took: 400.271237ms STEP: Creating RC which spawns 
configmap-volume pods Feb 3 21:08:32.578: INFO: Pod name wrapped-volume-race-cec222d3-3d7e-405d-b9b9-da06279adfed: Found 0 pods out of 5 Feb 3 21:08:37.589: INFO: Pod name wrapped-volume-race-cec222d3-3d7e-405d-b9b9-da06279adfed: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cec222d3-3d7e-405d-b9b9-da06279adfed in namespace emptydir-wrapper-861, will wait for the garbage collector to delete the pods Feb 3 21:08:52.300: INFO: Deleting ReplicationController wrapped-volume-race-cec222d3-3d7e-405d-b9b9-da06279adfed took: 7.117054ms Feb 3 21:08:52.700: INFO: Terminating ReplicationController wrapped-volume-race-cec222d3-3d7e-405d-b9b9-da06279adfed pods took: 400.279189ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:03.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-861" for this suite. • [SLOW TEST:99.539 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":65,"skipped":1137,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:03.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:09:03.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e" in namespace "downward-api-1711" to be "success or failure" Feb 3 21:09:03.463: INFO: Pod "downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e": Phase="Pending", Reason="", readiness=false. Elapsed: 64.596197ms Feb 3 21:09:05.727: INFO: Pod "downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328328666s Feb 3 21:09:07.731: INFO: Pod "downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.332195575s STEP: Saw pod success Feb 3 21:09:07.731: INFO: Pod "downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e" satisfied condition "success or failure" Feb 3 21:09:07.733: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e container client-container: STEP: delete the pod Feb 3 21:09:07.787: INFO: Waiting for pod downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e to disappear Feb 3 21:09:07.792: INFO: Pod downwardapi-volume-d552e418-27ae-485f-82ed-0079a0d1cf1e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1711" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1141,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:07.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 3 21:09:08.124: INFO: Waiting up to 5m0s for pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe" in namespace "emptydir-9352" to be "success or failure" Feb 3 21:09:08.133: INFO: Pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.629537ms Feb 3 21:09:10.146: INFO: Pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022539672s Feb 3 21:09:12.157: INFO: Pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032753308s Feb 3 21:09:14.160: INFO: Pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036207249s STEP: Saw pod success Feb 3 21:09:14.160: INFO: Pod "pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe" satisfied condition "success or failure" Feb 3 21:09:14.162: INFO: Trying to get logs from node jerma-worker2 pod pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe container test-container: STEP: delete the pod Feb 3 21:09:14.194: INFO: Waiting for pod pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe to disappear Feb 3 21:09:14.198: INFO: Pod pod-ed264606-6cb2-47e1-86d1-bd51d5264dbe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:14.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9352" for this suite. 
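------------------------------
The (root,0666,tmpfs) case above exercises an emptyDir backed by memory: setting Medium to "Memory" mounts a tmpfs, into which the test container writes a file with mode 0666 and reads it back. A sketch of a pod with that volume shape follows; the pod and mount names are illustrative, and the image is the agnhost one used throughout this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
------------------------------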
• [SLOW TEST:6.405 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1152,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:14.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:09:14.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:09:16.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:09:18.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983354, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:09:21.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Feb 3 21:09:21.818: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:21.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9776" for this suite. STEP: Destroying namespace "webhook-9776-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.744 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":68,"skipped":1154,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:21.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:09:22.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51" in namespace "downward-api-14" to be "success or failure" Feb 3 21:09:22.026: INFO: Pod "downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51": Phase="Pending", Reason="", readiness=false. Elapsed: 22.287917ms Feb 3 21:09:24.038: INFO: Pod "downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034112865s Feb 3 21:09:26.042: INFO: Pod "downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037828622s STEP: Saw pod success Feb 3 21:09:26.042: INFO: Pod "downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51" satisfied condition "success or failure" Feb 3 21:09:26.044: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51 container client-container: STEP: delete the pod Feb 3 21:09:26.104: INFO: Waiting for pod downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51 to disappear Feb 3 21:09:26.107: INFO: Pod downwardapi-volume-d6957302-47dc-417e-9781-def6a0144d51 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:26.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-14" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:26.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9631f303-15ea-4a57-88d3-ee4444966274 STEP: Creating a pod to test consume secrets Feb 3 21:09:26.232: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e" in namespace "projected-3568" to be "success or failure" Feb 3 21:09:26.247: INFO: Pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.471789ms Feb 3 21:09:28.259: INFO: Pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026564616s Feb 3 21:09:30.262: INFO: Pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030203879s Feb 3 21:09:32.269: INFO: Pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036607346s STEP: Saw pod success Feb 3 21:09:32.269: INFO: Pod "pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e" satisfied condition "success or failure" Feb 3 21:09:32.271: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e container projected-secret-volume-test: STEP: delete the pod Feb 3 21:09:32.290: INFO: Waiting for pod pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e to disappear Feb 3 21:09:32.295: INFO: Pod pod-projected-secrets-9105088a-624e-4d8f-961f-3a27bad7e12e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:32.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3568" for this suite. • [SLOW TEST:6.187 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1210,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:32.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5c755689-60a6-44f5-953a-8db544166ed4 STEP: Creating a pod to test consume secrets Feb 3 21:09:32.363: INFO: Waiting up to 5m0s for pod "pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2" in namespace "secrets-6990" to be "success or failure" Feb 3 21:09:32.404: INFO: Pod "pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2": Phase="Pending", Reason="", readiness=false. Elapsed: 41.094529ms Feb 3 21:09:34.408: INFO: Pod "pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045429911s Feb 3 21:09:36.412: INFO: Pod "pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049433606s STEP: Saw pod success Feb 3 21:09:36.412: INFO: Pod "pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2" satisfied condition "success or failure" Feb 3 21:09:36.415: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2 container secret-volume-test: STEP: delete the pod Feb 3 21:09:36.436: INFO: Waiting for pod pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2 to disappear Feb 3 21:09:36.457: INFO: Pod pod-secrets-6df0a364-eaa5-44fc-884f-8aa3912c70a2 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:36.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6990" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1220,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:36.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
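[editor's note, not part of the captured log] dnsPolicy "None" makes the kubelet skip cluster DNS entirely and render the pod's resolv.conf purely from dnsConfig; the nameserver 1.1.1.1 and search domain resolv.conf.local visible in the pod dump below come from that stanza. A minimal sketch of the spec shape, with an illustrative pod name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			// DNSNone: ignore cluster DNS and use DNSConfig verbatim.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
	fmt.Println(pod.Spec.DNSConfig.Nameservers, pod.Spec.DNSConfig.Searches)
}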
Feb 3 21:09:36.537: INFO: Created pod &Pod{ObjectMeta:{dns-2842 dns-2842 /api/v1/namespaces/dns-2842/pods/dns-2842 b426fb78-64d7-41e4-af32-547af8137979 6381722 0 2021-02-03 21:09:36 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b4d4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b4d4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b4d4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Feb 3 21:09:40.558: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2842 PodName:dns-2842 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:09:40.558: INFO: >>> kubeConfig: /root/.kube/config I0203 21:09:40.587418 6 log.go:172] (0xc002506160) (0xc000b30000) Create stream I0203 21:09:40.587449 6 log.go:172] (0xc002506160) (0xc000b30000) Stream added, broadcasting: 1 I0203 21:09:40.590060 6 log.go:172] (0xc002506160) Reply frame received for 1 I0203 21:09:40.590105 6 log.go:172] (0xc002506160) (0xc0011628c0) Create stream I0203 21:09:40.590120 6 log.go:172] (0xc002506160) (0xc0011628c0) Stream added, broadcasting: 3 I0203 21:09:40.591217 6 log.go:172] (0xc002506160) Reply frame received for 3 I0203 21:09:40.591256 6 log.go:172] (0xc002506160) (0xc001162dc0) Create stream I0203 21:09:40.591286 6 log.go:172] (0xc002506160) (0xc001162dc0) Stream added, broadcasting: 5 I0203 21:09:40.592698 6 log.go:172] (0xc002506160) Reply frame received for 5 I0203 21:09:40.701260 6 log.go:172] (0xc002506160) Data frame received for 3 I0203 21:09:40.701288 6 log.go:172] (0xc0011628c0) (3) Data frame handling I0203 21:09:40.701312 6 log.go:172] (0xc0011628c0) (3) Data frame sent I0203 21:09:40.702738 6 log.go:172] (0xc002506160) Data frame received for 3 I0203 21:09:40.702754 6 log.go:172] (0xc0011628c0) (3) Data frame handling I0203 21:09:40.702789 6 log.go:172] (0xc002506160) Data frame received for 5 I0203 21:09:40.702824 6 log.go:172] (0xc001162dc0) (5) Data frame handling I0203 21:09:40.704477 6 log.go:172] (0xc002506160) Data frame received for 1 I0203 21:09:40.704529 6 log.go:172] (0xc000b30000) (1) Data frame handling I0203 21:09:40.704558 6 log.go:172] (0xc000b30000) (1) Data frame sent I0203 21:09:40.704581 6 log.go:172] (0xc002506160) (0xc000b30000) Stream removed, broadcasting: 1 I0203 21:09:40.704602 6 log.go:172] (0xc002506160) Go away received I0203 21:09:40.705075 6 log.go:172] (0xc002506160) (0xc000b30000) Stream removed, broadcasting: 1 I0203 21:09:40.705103 6 log.go:172] (0xc002506160) (0xc0011628c0) Stream removed, broadcasting: 3 I0203 21:09:40.705122 6 log.go:172] (0xc002506160) (0xc001162dc0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
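[editor's note, not part of the captured log] The ExecWithOptions entries above and below are the e2e framework driving the pods/exec subresource; the numbered stream frames in the log are the SPDY channels carrying stdout/stderr back. A standalone client-go sketch of the same kind of exec, with the command swapped for a plain resolv.conf read; names match this test but the code is illustrative, and Stream's context-free signature matches the client-go of this era:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Build a POST against the pod's exec subresource.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("dns-2842").Name("dns-2842").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"cat", "/etc/resolv.conf"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String()) // for this pod: "nameserver 1.1.1.1" plus the custom search domain
}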
Feb 3 21:09:40.705: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2842 PodName:dns-2842 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 21:09:40.705: INFO: >>> kubeConfig: /root/.kube/config I0203 21:09:40.738800 6 log.go:172] (0xc002d944d0) (0xc000aeed20) Create stream I0203 21:09:40.738823 6 log.go:172] (0xc002d944d0) (0xc000aeed20) Stream added, broadcasting: 1 I0203 21:09:40.741045 6 log.go:172] (0xc002d944d0) Reply frame received for 1 I0203 21:09:40.741083 6 log.go:172] (0xc002d944d0) (0xc001162fa0) Create stream I0203 21:09:40.741097 6 log.go:172] (0xc002d944d0) (0xc001162fa0) Stream added, broadcasting: 3 I0203 21:09:40.742093 6 log.go:172] (0xc002d944d0) Reply frame received for 3 I0203 21:09:40.742113 6 log.go:172] (0xc002d944d0) (0xc000aeee60) Create stream I0203 21:09:40.742120 6 log.go:172] (0xc002d944d0) (0xc000aeee60) Stream added, broadcasting: 5 I0203 21:09:40.743252 6 log.go:172] (0xc002d944d0) Reply frame received for 5 I0203 21:09:40.805898 6 log.go:172] (0xc002d944d0) Data frame received for 3 I0203 21:09:40.805944 6 log.go:172] (0xc001162fa0) (3) Data frame handling I0203 21:09:40.805970 6 log.go:172] (0xc001162fa0) (3) Data frame sent I0203 21:09:40.807514 6 log.go:172] (0xc002d944d0) Data frame received for 3 I0203 21:09:40.807541 6 log.go:172] (0xc001162fa0) (3) Data frame handling I0203 21:09:40.807585 6 log.go:172] (0xc002d944d0) Data frame received for 5 I0203 21:09:40.807616 6 log.go:172] (0xc000aeee60) (5) Data frame handling I0203 21:09:40.809034 6 log.go:172] (0xc002d944d0) Data frame received for 1 I0203 21:09:40.809057 6 log.go:172] (0xc000aeed20) (1) Data frame handling I0203 21:09:40.809072 6 log.go:172] (0xc000aeed20) (1) Data frame sent I0203 21:09:40.809089 6 log.go:172] (0xc002d944d0) (0xc000aeed20) Stream removed, broadcasting: 1 I0203 21:09:40.809104 6 log.go:172] (0xc002d944d0) Go away received I0203 21:09:40.809232 6 log.go:172] (0xc002d944d0) (0xc000aeed20) Stream removed, broadcasting: 1 I0203 21:09:40.809277 6 log.go:172] (0xc002d944d0) (0xc001162fa0) Stream removed, broadcasting: 3 I0203 21:09:40.809306 6 log.go:172] (0xc002d944d0) (0xc000aeee60) Stream removed, broadcasting: 5 Feb 3 21:09:40.809: INFO: Deleting pod dns-2842... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:40.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2842" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":72,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:40.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:41.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6150" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1252,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:41.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b0f543fe-4e5c-402d-bc30-6f6d8f251d78 STEP: Creating a pod to test consume secrets Feb 3 21:09:41.489: INFO: Waiting up to 5m0s for pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f" in namespace "secrets-9339" to be "success or failure" Feb 3 21:09:41.530: INFO: Pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.492294ms Feb 3 21:09:43.571: INFO: Pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082087578s Feb 3 21:09:45.576: INFO: Pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.086419009s Feb 3 21:09:47.579: INFO: Pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089329774s STEP: Saw pod success Feb 3 21:09:47.579: INFO: Pod "pod-secrets-3c267177-9845-42cf-8541-83e93528c63f" satisfied condition "success or failure" Feb 3 21:09:47.581: INFO: Trying to get logs from node jerma-worker pod pod-secrets-3c267177-9845-42cf-8541-83e93528c63f container secret-volume-test: STEP: delete the pod Feb 3 21:09:47.633: INFO: Waiting for pod pod-secrets-3c267177-9845-42cf-8541-83e93528c63f to disappear Feb 3 21:09:47.642: INFO: Pod pod-secrets-3c267177-9845-42cf-8541-83e93528c63f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:47.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9339" for this suite. • [SLOW TEST:6.369 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1253,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:47.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:09:47.683: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 3 21:09:50.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3720 create -f -' Feb 3 21:09:54.119: INFO: stderr: "" Feb 3 21:09:54.119: INFO: stdout: "e2e-test-crd-publish-openapi-7541-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 3 21:09:54.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3720 delete e2e-test-crd-publish-openapi-7541-crds test-cr' Feb 3 21:09:54.223: INFO: stderr: "" Feb 3 21:09:54.223: INFO: stdout: "e2e-test-crd-publish-openapi-7541-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 3 21:09:54.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3720 
apply -f -' Feb 3 21:09:54.478: INFO: stderr: "" Feb 3 21:09:54.478: INFO: stdout: "e2e-test-crd-publish-openapi-7541-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 3 21:09:54.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3720 delete e2e-test-crd-publish-openapi-7541-crds test-cr' Feb 3 21:09:54.619: INFO: stderr: "" Feb 3 21:09:54.619: INFO: stdout: "e2e-test-crd-publish-openapi-7541-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 3 21:09:54.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7541-crds' Feb 3 21:09:54.852: INFO: stderr: "" Feb 3 21:09:54.852: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7541-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:09:57.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3720" for this suite. 
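[editor's note, not part of the captured log] The kubectl explain output above reflects a CRD schema whose embedded object opts out of pruning. In the apiextensions.k8s.io Go types that is an object property carrying x-kubernetes-embedded-resource plus x-kubernetes-preserve-unknown-fields, sketched below with the v1 types; the property layout is an assumption about the generated test CRD, not a copy of it.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	schema := &apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				// Treat the value as a full Kubernetes object (apiVersion/kind/metadata)...
				XEmbeddedResource: true,
				// ...and keep any fields the schema does not declare instead of pruning them.
				XPreserveUnknownFields: boolPtr(true),
			},
		},
	}
	fmt.Println("spec preserves unknown fields:", *schema.Properties["spec"].XPreserveUnknownFields)
}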
• [SLOW TEST:10.128 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":75,"skipped":1260,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:09:57.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 3 21:10:05.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:05.991: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:07.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:07.995: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:09.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:09.994: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:11.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:11.995: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:13.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:13.995: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:15.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:15.995: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:17.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:17.996: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:19.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:19.995: INFO: Pod pod-with-prestop-exec-hook still exists Feb 3 21:10:21.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 3 21:10:21.996: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:10:22.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-lifecycle-hook-2651" for this suite. • [SLOW TEST:24.233 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1275,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:10:22.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:10:22.596: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:10:24.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983422, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983422, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983422, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983422, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:10:27.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:10:27.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9699-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the 
webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:10:28.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9120" for this suite. STEP: Destroying namespace "webhook-9120-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.911 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":77,"skipped":1286,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:10:28.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:10:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-512" for this suite. 
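[editor's note, not part of the captured log] The hostAliases test relies on the kubelet appending pod.spec.hostAliases entries to the container's /etc/hosts. A minimal sketch of that spec shape; the IP and hostnames are illustrative, not the values the e2e test writes:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			// Each alias becomes a line in the pod's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "127.0.0.1",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.HostAliases)
}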
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1304,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:10:33.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-116.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-116.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-116.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-116.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:10:39.324: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.327: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.330: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.342: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.345: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.348: INFO: Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.351: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:39.357: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:10:44.362: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods 
dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.370: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.373: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.375: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.382: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.384: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.387: INFO: Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.389: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:44.395: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:10:49.361: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.365: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.369: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.372: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod 
dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.386: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.389: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.391: INFO: Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.394: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:49.398: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:10:54.394: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.397: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.400: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.403: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.412: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.415: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.418: INFO: 
Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.421: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:54.428: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:10:59.361: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.365: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.368: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.370: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.383: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.387: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.390: INFO: Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.393: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:10:59.398: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:11:04.361: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.365: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.369: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.372: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.383: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.386: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.394: INFO: Unable to read jessie_udp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.399: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:04.404: INFO: Lookups using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local wheezy_udp@dns-test-service-2.dns-116.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-116.svc.cluster.local jessie_udp@dns-test-service-2.dns-116.svc.cluster.local jessie_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:11:09.372: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local from pod dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3: the server could not find the requested resource (get pods dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3) Feb 3 21:11:09.397: INFO: Lookups using 
dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 failed for: [wheezy_tcp@dns-test-service-2.dns-116.svc.cluster.local] Feb 3 21:11:14.395: INFO: DNS probes using dns-116/dns-test-e5435c60-7bd3-4983-87d0-cddf0e9231e3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:14.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-116" for this suite. • [SLOW TEST:41.936 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":79,"skipped":1319,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:15.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 3 21:11:21.869: INFO: Successfully updated pod "pod-update-activedeadlineseconds-61b1b36e-388a-4ff0-964e-dff22e78f954" Feb 3 21:11:21.869: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-61b1b36e-388a-4ff0-964e-dff22e78f954" in namespace "pods-3663" to be "terminated due to deadline exceeded" Feb 3 21:11:21.883: INFO: Pod "pod-update-activedeadlineseconds-61b1b36e-388a-4ff0-964e-dff22e78f954": Phase="Running", Reason="", readiness=true. Elapsed: 14.19009ms Feb 3 21:11:23.887: INFO: Pod "pod-update-activedeadlineseconds-61b1b36e-388a-4ff0-964e-dff22e78f954": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018221288s Feb 3 21:11:23.887: INFO: Pod "pod-update-activedeadlineseconds-61b1b36e-388a-4ff0-964e-dff22e78f954" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:23.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3663" for this suite. 
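The activeDeadlineSeconds update above succeeds because pod.spec.activeDeadlineSeconds is one of the few spec fields that may be set, or shortened, on a running pod; once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, which is the Phase="Failed", Reason="DeadlineExceeded" transition logged about two seconds after the update. A minimal client-go sketch of the same flow, assuming the pre-1.18 client-go that matches this v1.17 suite (no context arguments) and placeholder pod/namespace names:

package main

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite uses.
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    ns, name := "default", "my-pod" // placeholders, not the suite's generated names

    // Fetch the running pod and shrink its deadline.
    pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    deadline := int64(5) // seconds, measured from pod start
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
        panic(err)
    }

    // Poll until the kubelet terminates the pod for exceeding the deadline.
    for {
        p, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if p.Status.Phase == corev1.PodFailed && p.Status.Reason == "DeadlineExceeded" {
            fmt.Println("terminated due to deadline exceeded")
            return
        }
        time.Sleep(2 * time.Second)
    }
}

Setting the deadline below the pod's already-elapsed runtime makes the kubelet act on its next sync loop, which is roughly why the transition above lands within a couple of seconds.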
• [SLOW TEST:8.863 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1327,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:23.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-skrtx in namespace proxy-5280 I0203 21:11:24.011077 6 runners.go:189] Created replication controller with name: proxy-service-skrtx, namespace: proxy-5280, replica count: 1 I0203 21:11:25.061555 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:11:26.061775 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:11:27.062071 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:11:28.062360 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:29.062585 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:30.062819 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:31.063176 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:32.063444 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:33.063684 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:34.063970 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 21:11:35.064240 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0203 21:11:36.064462 6 runners.go:189] proxy-service-skrtx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 3 21:11:36.067: INFO: setup took 12.094208546s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 3 21:11:36.075: INFO: (0) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 7.69959ms) Feb 3 21:11:36.076: INFO: (0) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 8.445777ms) Feb 3 21:11:36.078: INFO: (0) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 10.205869ms) Feb 3 21:11:36.078: INFO: (0) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 10.328152ms) Feb 3 21:11:36.078: INFO: (0) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 10.731451ms) Feb 3 21:11:36.079: INFO: (0) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 10.961739ms) Feb 3 21:11:36.081: INFO: (0) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 13.595044ms) Feb 3 21:11:36.082: INFO: (0) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 13.853341ms) Feb 3 21:11:36.082: INFO: (0) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 14.013125ms) Feb 3 21:11:36.082: INFO: (0) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 13.97707ms) Feb 3 21:11:36.082: INFO: (0) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 14.185212ms) Feb 3 21:11:36.084: INFO: (0) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 16.411346ms) Feb 3 21:11:36.084: INFO: (0) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 3.558002ms) Feb 3 21:11:36.088: INFO: (1) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 3.643764ms) Feb 3 21:11:36.088: INFO: (1) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 5.096608ms) Feb 3 21:11:36.089: INFO: (1) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 5.188352ms) Feb 3 21:11:36.089: INFO: (1) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 5.160017ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 5.251353ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.555354ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 5.196767ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 5.157697ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 5.19894ms) Feb 3 21:11:36.090: INFO: (1) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 5.316491ms) Feb 3 21:11:36.092: INFO: (2) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 2.215537ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.768389ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.802257ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 3.836419ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.303681ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.480866ms) Feb 3 21:11:36.094: INFO: (2) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 4.554657ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 4.834674ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.761499ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.961735ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 5.022601ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 5.061766ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 5.06195ms) Feb 3 21:11:36.095: INFO: (2) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 5.615812ms) Feb 3 21:11:36.098: INFO: (3) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 2.701058ms) Feb 3 21:11:36.098: INFO: (3) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 2.899509ms) Feb 3 21:11:36.098: INFO: (3) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 2.98068ms) Feb 3 21:11:36.098: INFO: (3) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.083935ms) Feb 3 21:11:36.098: INFO: (3) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 3.055017ms) Feb 3 21:11:36.099: INFO: (3) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.213869ms) Feb 3 21:11:36.099: INFO: (3) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.221862ms) Feb 3 21:11:36.099: INFO: (3) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.450691ms) Feb 3 21:11:36.099: INFO: (3) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 2.803337ms) Feb 3 21:11:36.104: INFO: (4) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 2.875975ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... 
(200; 4.82404ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.844452ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 4.856813ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.931193ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.933732ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.962766ms) Feb 3 21:11:36.106: INFO: (4) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.96884ms) Feb 3 21:11:36.107: INFO: (4) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 6.175332ms) Feb 3 21:11:36.108: INFO: (4) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 7.053567ms) Feb 3 21:11:36.108: INFO: (4) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 7.087576ms) Feb 3 21:11:36.108: INFO: (4) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 7.043725ms) Feb 3 21:11:36.108: INFO: (4) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 7.123002ms) Feb 3 21:11:36.108: INFO: (4) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 7.064631ms) Feb 3 21:11:36.111: INFO: (5) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 2.440213ms) Feb 3 21:11:36.111: INFO: (5) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 2.393589ms) Feb 3 21:11:36.111: INFO: (5) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.104432ms) Feb 3 21:11:36.112: INFO: (5) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 3.422674ms) Feb 3 21:11:36.112: INFO: (5) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 3.504074ms) Feb 3 21:11:36.112: INFO: (5) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.73226ms) Feb 3 21:11:36.112: INFO: (5) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.006961ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.510143ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 4.498761ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.460746ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.639654ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 4.740551ms) Feb 3 21:11:36.113: INFO: (5) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 4.737376ms) Feb 3 21:11:36.115: INFO: (6) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 1.971586ms) Feb 3 21:11:36.115: INFO: (6) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.201342ms) Feb 3 21:11:36.117: INFO: (6) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.384627ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.688874ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 4.726513ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.851694ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.892233ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 4.947643ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 5.076737ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 5.185066ms) Feb 3 21:11:36.118: INFO: (6) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 5.155268ms) Feb 3 21:11:36.119: INFO: (6) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 5.54578ms) Feb 3 21:11:36.119: INFO: (6) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 5.524764ms) Feb 3 21:11:36.119: INFO: (6) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 5.528622ms) Feb 3 21:11:36.119: INFO: (6) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 5.5684ms) Feb 3 21:11:36.121: INFO: (7) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 1.989934ms) Feb 3 21:11:36.122: INFO: (7) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... 
(200; 3.3084ms) Feb 3 21:11:36.122: INFO: (7) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.2435ms) Feb 3 21:11:36.122: INFO: (7) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 3.368424ms) Feb 3 21:11:36.122: INFO: (7) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 4.96357ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 5.094839ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 5.143253ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 5.096732ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 5.29648ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 5.317167ms) Feb 3 21:11:36.124: INFO: (7) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 5.470115ms) Feb 3 21:11:36.128: INFO: (8) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.374286ms) Feb 3 21:11:36.132: INFO: (8) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 7.49811ms) Feb 3 21:11:36.132: INFO: (8) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 7.908117ms) Feb 3 21:11:36.132: INFO: (8) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 7.829559ms) Feb 3 21:11:36.132: INFO: (8) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 7.802757ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 8.32394ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 8.51823ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 8.348397ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 8.43276ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 8.468941ms) Feb 3 21:11:36.133: INFO: (8) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 3.971827ms) Feb 3 21:11:36.138: INFO: (9) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 3.964103ms) Feb 3 21:11:36.138: INFO: (9) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.774327ms) Feb 3 21:11:36.138: INFO: (9) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 3.89448ms) Feb 3 21:11:36.139: INFO: (9) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.283265ms) Feb 3 21:11:36.139: INFO: (9) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.865296ms) Feb 3 21:11:36.139: INFO: (9) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.73788ms) Feb 3 21:11:36.139: INFO: (9) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.695538ms) Feb 3 21:11:36.139: INFO: (9) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 2.99826ms) Feb 3 21:11:36.142: INFO: (10) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.983184ms) Feb 3 21:11:36.142: INFO: (10) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.052888ms) Feb 3 21:11:36.142: INFO: (10) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 3.149064ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 3.608351ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.728705ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 3.806937ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.814393ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 3.856216ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 3.825691ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.976263ms) Feb 3 21:11:36.143: INFO: (10) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.207405ms) Feb 3 21:11:36.144: INFO: (10) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 4.56526ms) Feb 3 21:11:36.144: INFO: (10) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.637734ms) Feb 3 21:11:36.144: INFO: (10) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.61985ms) Feb 3 21:11:36.147: INFO: (11) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 3.070922ms) Feb 3 21:11:36.147: INFO: (11) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 3.279951ms) Feb 3 21:11:36.147: INFO: (11) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 3.265558ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 3.780799ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.809203ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 3.7845ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... 
(200; 3.793231ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.854141ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.902266ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 3.9424ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 3.915052ms) Feb 3 21:11:36.148: INFO: (11) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 3.910613ms) Feb 3 21:11:36.149: INFO: (11) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.609717ms) Feb 3 21:11:36.152: INFO: (12) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.805129ms) Feb 3 21:11:36.152: INFO: (12) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 2.932653ms) Feb 3 21:11:36.152: INFO: (12) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 3.153855ms) Feb 3 21:11:36.152: INFO: (12) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 3.269552ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.032432ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.030234ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 4.153288ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 4.173662ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.41361ms) Feb 3 21:11:36.153: INFO: (12) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 4.166498ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.141397ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.128979ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.153593ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.285478ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.180399ms) Feb 3 21:11:36.158: INFO: (13) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: ... (200; 4.186579ms) Feb 3 21:11:36.159: INFO: (13) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 4.430174ms) Feb 3 21:11:36.159: INFO: (13) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 4.372919ms) Feb 3 21:11:36.159: INFO: (13) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.34202ms) Feb 3 21:11:36.161: INFO: (14) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 2.596168ms) Feb 3 21:11:36.161: INFO: (14) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 2.844379ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.861423ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.91078ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 2.936231ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 2.920708ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 2.988311ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 2.937846ms) Feb 3 21:11:36.162: INFO: (14) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 3.159185ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 3.808332ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.025205ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 4.121821ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 4.090807ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 4.095399ms) Feb 3 21:11:36.163: INFO: (14) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.138516ms) Feb 3 21:11:36.165: INFO: (15) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 2.29248ms) Feb 3 21:11:36.165: INFO: (15) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 2.407137ms) Feb 3 21:11:36.165: INFO: (15) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 2.417067ms) Feb 3 21:11:36.166: INFO: (15) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 2.640354ms) Feb 3 21:11:36.166: INFO: (15) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 2.73395ms) Feb 3 21:11:36.166: INFO: (15) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 2.683116ms) Feb 3 21:11:36.166: INFO: (15) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: ... (200; 2.864037ms) Feb 3 21:11:36.170: INFO: (16) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 3.002905ms) Feb 3 21:11:36.170: INFO: (16) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 11.844618ms) Feb 3 21:11:36.179: INFO: (16) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 11.872368ms) Feb 3 21:11:36.179: INFO: (16) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 11.922765ms) Feb 3 21:11:36.179: INFO: (16) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 11.954336ms) Feb 3 21:11:36.179: INFO: (16) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 11.916162ms) Feb 3 21:11:36.179: INFO: (16) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 12.135455ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 3.18796ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 3.278232ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname2/proxy/: tls qux (200; 3.347522ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 3.309756ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 3.311686ms) Feb 3 21:11:36.182: INFO: (17) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 3.317986ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 3.557244ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... (200; 3.759444ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:462/proxy/: tls qux (200; 4.135118ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.315576ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 4.33512ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.36585ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:160/proxy/: foo (200; 4.428205ms) Feb 3 21:11:36.183: INFO: (17) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test (200; 4.382383ms) Feb 3 21:11:36.188: INFO: (18) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:1080/proxy/: ... (200; 4.366669ms) Feb 3 21:11:36.188: INFO: (18) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname2/proxy/: bar (200; 4.434789ms) Feb 3 21:11:36.188: INFO: (18) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:460/proxy/: tls baz (200; 4.841075ms) Feb 3 21:11:36.189: INFO: (18) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:1080/proxy/: test<... 
(200; 4.866617ms) Feb 3 21:11:36.189: INFO: (18) /api/v1/namespaces/proxy-5280/pods/http:proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.912031ms) Feb 3 21:11:36.189: INFO: (18) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 4.911377ms) Feb 3 21:11:36.189: INFO: (18) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname1/proxy/: foo (200; 4.945724ms) Feb 3 21:11:36.189: INFO: (18) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: ... (200; 4.023725ms) Feb 3 21:11:36.193: INFO: (19) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws:162/proxy/: bar (200; 4.149292ms) Feb 3 21:11:36.193: INFO: (19) /api/v1/namespaces/proxy-5280/pods/https:proxy-service-skrtx-kd4ws:443/proxy/: test<... (200; 4.881032ms) Feb 3 21:11:36.194: INFO: (19) /api/v1/namespaces/proxy-5280/pods/proxy-service-skrtx-kd4ws/proxy/: test (200; 4.899544ms) Feb 3 21:11:36.194: INFO: (19) /api/v1/namespaces/proxy-5280/services/proxy-service-skrtx:portname2/proxy/: bar (200; 5.005301ms) Feb 3 21:11:36.194: INFO: (19) /api/v1/namespaces/proxy-5280/services/http:proxy-service-skrtx:portname1/proxy/: foo (200; 5.016102ms) Feb 3 21:11:36.194: INFO: (19) /api/v1/namespaces/proxy-5280/services/https:proxy-service-skrtx:tlsportname1/proxy/: tls baz (200; 5.093195ms) STEP: deleting ReplicationController proxy-service-skrtx in namespace proxy-5280, will wait for the garbage collector to delete the pods Feb 3 21:11:36.254: INFO: Deleting ReplicationController proxy-service-skrtx took: 8.606913ms Feb 3 21:11:36.655: INFO: Terminating ReplicationController proxy-service-skrtx pods took: 400.311841ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:42.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5280" for this suite. 
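Every attempt in the timing table above is one GET through the apiserver's proxy subresource: pods are addressed as [scheme:]name:port and services as name:portname under /api/v1/namespaces/<ns>/{pods|services}/<id>/proxy/. A sketch of one pod probe and one service probe using the same pre-1.18 client-go request builder (clientset construction as in the earlier sketch; names copied from the log):

package sketch

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// proxyProbe issues the two kinds of requests the test above times:
// one through the pod proxy, one through the service proxy.
func proxyProbe(cs *kubernetes.Clientset) error {
    // Pod proxy to container port 1080; omitting the scheme implies http.
    podBody, err := cs.CoreV1().RESTClient().Get().
        Namespace("proxy-5280").
        Resource("pods").
        Name("proxy-service-skrtx-kd4ws:1080").
        SubResource("proxy").
        DoRaw()
    if err != nil {
        return err
    }
    fmt.Printf("pod replied: %q\n", podBody)

    // Service proxy addressed by the named port "portname1".
    svcBody, err := cs.CoreV1().RESTClient().Get().
        Namespace("proxy-5280").
        Resource("services").
        Name("proxy-service-skrtx:portname1").
        SubResource("proxy").
        DoRaw()
    if err != nil {
        return err
    }
    fmt.Printf("service replied: %q\n", svcBody)
    return nil
}

The https: prefix in paths like https:proxy-service-skrtx-kd4ws:443 selects the backend scheme the apiserver uses when it forwards the request, which is why the same pod answers on plain and TLS ports in the same run.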
• [SLOW TEST:18.267 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":81,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:42.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-fa130df7-a953-44c2-a1aa-9a2a15391ce6 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-150" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":82,"skipped":1392,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:42.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:11:42.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65" in namespace "downward-api-654" to be "success or failure" Feb 3 21:11:42.334: INFO: Pod "downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65": Phase="Pending", Reason="", readiness=false. Elapsed: 5.184621ms Feb 3 21:11:44.352: INFO: Pod "downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022956414s Feb 3 21:11:46.356: INFO: Pod "downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026296397s STEP: Saw pod success Feb 3 21:11:46.356: INFO: Pod "downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65" satisfied condition "success or failure" Feb 3 21:11:46.359: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65 container client-container: STEP: delete the pod Feb 3 21:11:46.455: INFO: Waiting for pod downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65 to disappear Feb 3 21:11:46.466: INFO: Pod downwardapi-volume-41bc7c2b-0fe2-4927-a697-0e37c92a4c65 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:46.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-654" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1407,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:46.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:11:50.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7332" for this suite. 
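The kubelet test above schedules a busybox container whose command always fails and then checks that the container status carries a terminated state with a reason set. A sketch of the pod shape and the status field being read; the image, names, and the exact Reason string (typically "Error" for a non-zero exit) are assumptions, not taken from the log:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// failingPod creates a pod whose only container exits non-zero at once;
// RestartPolicy Never keeps the kubelet from retrying, so the
// terminated state sticks and can be read back.
func failingPod(cs *kubernetes.Clientset) (*corev1.Pod, error) {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "bin-false",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
        },
    }
    return cs.CoreV1().Pods("default").Create(pod) // pre-1.18 signature
}

// Once the pod has failed, the check reads the container status, roughly:
//   got, _ := cs.CoreV1().Pods("default").Get("bin-false", metav1.GetOptions{})
//   term := got.Status.ContainerStatuses[0].State.Terminated
//   // expect term != nil, term.ExitCode == 1, term.Reason non-empty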
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1428,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:11:50.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 3 21:11:58.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:11:58.664: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:00.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:00.667: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:02.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:02.668: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:04.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:04.667: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:06.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:06.669: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:08.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:08.668: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:10.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:10.669: INFO: Pod pod-with-poststart-exec-hook still exists Feb 3 21:12:12.664: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 3 21:12:12.667: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:12:12.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3542" for this suite. 
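The lifecycle-hook test above attaches a postStart exec handler: the kubelet runs it right after the container starts and does not treat the container as started until the handler completes, after which the test verifies the hook's side effect and deletes the pod. A sketch of the pod shape against the v1.17 API used here (corev1.Handler was only later renamed LifecycleHandler); the image, command, and echoed path are placeholders:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// postStartPod builds a long-running container with a postStart exec
// handler. If the handler fails, the kubelet kills the container, so
// reaching Running implies the hook ran.
func postStartPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "poststart",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
                Lifecycle: &corev1.Lifecycle{
                    PostStart: &corev1.Handler{
                        Exec: &corev1.ExecAction{
                            Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
                        },
                    },
                },
            }},
        },
    }
}

The trailing run of "still exists" lines in the log is ordinary graceful deletion: the test deletes the pod and re-GETs it every two seconds until the apiserver returns not-found.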
• [SLOW TEST:22.142 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1433,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:12:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 3 21:12:21.364: INFO: Successfully updated pod "labelsupdate7425c4de-e734-4727-b0d7-1ba0da435ce3" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:12:25.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3677" for this suite. 
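The update-labels test works because a downwardAPI volume file backed by metadata.labels is rewritten in place by the kubelet when the pod's labels change, with no container restart; the "Successfully updated pod" line is the API-side label update, and the test then tails the projected file for the new value. A sketch of the volume wiring; mount path, image, and label values are placeholders:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsPod wires a downwardAPI volume file to metadata.labels so the
// kubelet keeps /etc/podinfo/labels in sync with the pod's labels.
func labelsPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate",
            Labels: map[string]string{"key": "value1"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 1; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
}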
• [SLOW TEST:12.755 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1451,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:12:25.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6988.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6988.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6988.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6988.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.97.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.97.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.97.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.97.95_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6988.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6988.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6988.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6988.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6988.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6988.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 95.97.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.97.95_udp@PTR;check="$$(dig +tcp +noall +answer +search 95.97.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.97.95_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:12:41.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.737: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.739: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.741: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.754: INFO: Unable to read jessie_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.756: INFO: Unable to read jessie_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.759: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:41.771: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_udp@dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@dns-test-service.dns-6988.svc.cluster.local jessie_tcp@dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:12:46.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) 
Feb 3 21:12:46.781: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.783: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.798: INFO: Unable to read jessie_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.800: INFO: Unable to read jessie_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.802: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.803: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:46.814: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_udp@dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@dns-test-service.dns-6988.svc.cluster.local jessie_tcp@dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:12:51.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.781: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.784: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.800: INFO: Unable to read jessie_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods 
dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.802: INFO: Unable to read jessie_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.804: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.805: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:51.816: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_udp@dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@dns-test-service.dns-6988.svc.cluster.local jessie_tcp@dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:12:56.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.786: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.806: INFO: Unable to read jessie_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.811: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.814: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could 
not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:12:56.834: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_udp@dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@dns-test-service.dns-6988.svc.cluster.local jessie_tcp@dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:13:01.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.788: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.791: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.806: INFO: Unable to read jessie_udp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.808: INFO: Unable to read jessie_tcp@dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.810: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.812: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:01.828: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_udp@dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@dns-test-service.dns-6988.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@dns-test-service.dns-6988.svc.cluster.local jessie_tcp@dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:13:06.784: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:06.812: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local from pod dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735: the server could not find the requested resource (get pods dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735) Feb 3 21:13:06.835: INFO: Lookups using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6988.svc.cluster.local] Feb 3 21:13:11.855: INFO: DNS probes using dns-6988/dns-test-b68ab2c2-841e-4ca0-8a24-f53a856fb735 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:12.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6988" for this suite. • [SLOW TEST:47.453 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":87,"skipped":1470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:12.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 3 21:13:13.056: INFO: Waiting up to 5m0s for pod "pod-84c04b74-565e-4c54-a8f3-563820105539" in namespace "emptydir-222" to be "success or failure" Feb 3 21:13:13.073: INFO: Pod "pod-84c04b74-565e-4c54-a8f3-563820105539": Phase="Pending", Reason="", readiness=false. Elapsed: 16.411497ms Feb 3 21:13:15.185: INFO: Pod "pod-84c04b74-565e-4c54-a8f3-563820105539": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128471967s Feb 3 21:13:17.269: INFO: Pod "pod-84c04b74-565e-4c54-a8f3-563820105539": Phase="Running", Reason="", readiness=true. Elapsed: 4.212703875s Feb 3 21:13:19.272: INFO: Pod "pod-84c04b74-565e-4c54-a8f3-563820105539": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.215382776s STEP: Saw pod success Feb 3 21:13:19.272: INFO: Pod "pod-84c04b74-565e-4c54-a8f3-563820105539" satisfied condition "success or failure" Feb 3 21:13:19.273: INFO: Trying to get logs from node jerma-worker2 pod pod-84c04b74-565e-4c54-a8f3-563820105539 container test-container: STEP: delete the pod Feb 3 21:13:19.289: INFO: Waiting for pod pod-84c04b74-565e-4c54-a8f3-563820105539 to disappear Feb 3 21:13:19.304: INFO: Pod pod-84c04b74-565e-4c54-a8f3-563820105539 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:19.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-222" for this suite. • [SLOW TEST:6.423 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1500,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:19.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:13:19.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8106' Feb 3 21:13:19.750: INFO: stderr: "" Feb 3 21:13:19.750: INFO: stdout: "replicationcontroller/agnhost-master created\n" Feb 3 21:13:19.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8106' Feb 3 21:13:20.040: INFO: stderr: "" Feb 3 21:13:20.040: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 3 21:13:21.042: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:13:21.042: INFO: Found 0 / 1 Feb 3 21:13:22.042: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:13:22.042: INFO: Found 0 / 1 Feb 3 21:13:23.042: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:13:23.042: INFO: Found 1 / 1 Feb 3 21:13:23.043: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 3 21:13:23.044: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 21:13:23.044: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
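The entries that follow run kubectl describe against each object this spec created, plus a node and the test namespace, and check that the output carries the relevant fields. Condensed into standalone commands (object and namespace names exactly as logged; the --kubeconfig flag is omitted):

  kubectl describe pod agnhost-master-zc487 --namespace=kubectl-8106
  kubectl describe rc agnhost-master --namespace=kubectl-8106
  kubectl describe service agnhost-master --namespace=kubectl-8106
  kubectl describe node jerma-control-plane
  kubectl describe namespace kubectl-8106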
Feb 3 21:13:23.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-zc487 --namespace=kubectl-8106' Feb 3 21:13:23.170: INFO: stderr: "" Feb 3 21:13:23.170: INFO: stdout: "Name: agnhost-master-zc487\nNamespace: kubectl-8106\nPriority: 0\nNode: jerma-worker2/172.18.0.5\nStart Time: Wed, 03 Feb 2021 21:13:19 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.250\nIPs:\n IP: 10.244.1.250\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d441977e55f80f684b80a982537fe3a1b62b9fa01a25f121f8e0f8a1e4e36174\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 03 Feb 2021 21:13:22 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-cjjz5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-cjjz5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-cjjz5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-8106/agnhost-master-zc487 to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" Feb 3 21:13:23.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8106' Feb 3 21:13:23.283: INFO: stderr: "" Feb 3 21:13:23.283: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8106\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-zc487\n" Feb 3 21:13:23.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8106' Feb 3 21:13:23.380: INFO: stderr: "" Feb 3 21:13:23.380: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8106\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.36.184\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.250:6379\nSession Affinity: None\nEvents: <none>\n" Feb 3 21:13:23.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Feb 3 21:13:23.495: INFO: stderr: "" Feb 3 21:13:23.495: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:28:44 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Wed, 03 Feb 2021 21:13:15 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 03 Feb 2021 21:08:45 +0000 Sun, 10 Jan 2021 17:28:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 03 Feb 2021 21:08:45 +0000 Sun, 10 Jan 2021 17:28:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 03 Feb 2021 21:08:45 +0000 Sun, 10 Jan 2021 17:28:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 03 Feb 2021 21:08:45 +0000 Sun, 10 Jan 2021 17:30:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: f2a69b341b2c4fbaab31db445a7a4b8d\n System UUID: 9c15b39a-bc1d-4d29-8c92-991cbb313925\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.17.11\n Kube-Proxy Version: v1.17.11\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/jerma/jerma-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-lplmg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system coredns-6955765f44-vdmkg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kindnet-x42bj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 24d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-proxy-8rpbj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n local-path-storage local-path-provisioner-5f4b769cdf-5fdg8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Feb 3 21:13:23.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8106' Feb 3 21:13:23.598: INFO: stderr: "" Feb 3 21:13:23.598: INFO: stdout: "Name: kubectl-8106\nLabels: e2e-framework=kubectl\n
e2e-run=642bc712-2652-481b-b3e7-e8c70f7d3410\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:23.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8106" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":89,"skipped":1516,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:23.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 21:13:23.700: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 21:13:23.718: INFO: Waiting for terminating namespaces to be deleted... Feb 3 21:13:23.720: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Feb 3 21:13:23.724: INFO: chaos-daemon-f2nl5 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.724: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:13:23.724: INFO: chaos-controller-manager-7f9bbd476f-2hzrh from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.724: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 21:13:23.724: INFO: kindnet-c2jgb from kube-system started at 2021-01-10 17:30:25 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.724: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 21:13:23.724: INFO: kube-proxy-gdgm6 from kube-system started at 2021-01-10 17:29:37 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.724: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:13:23.724: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Feb 3 21:13:23.728: INFO: agnhost-master-zc487 from kubectl-8106 started at 2021-02-03 21:13:19 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.728: INFO: Container agnhost-master ready: true, restart count 0 Feb 3 21:13:23.728: INFO: kindnet-4ww4f from kube-system started at 2021-01-10 17:29:22 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.728: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 21:13:23.728: INFO: chaos-daemon-n2277 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.728: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:13:23.728: INFO: kube-proxy-8vfzd from kube-system started at 2021-01-10 17:29:16 +0000 UTC (1 container statuses recorded) Feb 3 21:13:23.728: INFO: Container kube-proxy ready: true,
restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-dfb05ced-d615-4801-99ec-172e5cebe8c9 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-dfb05ced-d615-4801-99ec-172e5cebe8c9 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-dfb05ced-d615-4801-99ec-172e5cebe8c9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:41.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3736" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.271 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":90,"skipped":1536,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:41.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 3 21:13:41.985: INFO: Waiting up to 5m0s for pod "pod-4771b964-00db-48e0-b646-22229b5715a1" in namespace "emptydir-5859" to be "success or failure" Feb 3 21:13:42.006: INFO: Pod "pod-4771b964-00db-48e0-b646-22229b5715a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.193567ms Feb 3 21:13:44.009: INFO: Pod "pod-4771b964-00db-48e0-b646-22229b5715a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024680892s Feb 3 21:13:46.031: INFO: Pod "pod-4771b964-00db-48e0-b646-22229b5715a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.046720638s Feb 3 21:13:48.089: INFO: Pod "pod-4771b964-00db-48e0-b646-22229b5715a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10470739s STEP: Saw pod success Feb 3 21:13:48.089: INFO: Pod "pod-4771b964-00db-48e0-b646-22229b5715a1" satisfied condition "success or failure" Feb 3 21:13:48.092: INFO: Trying to get logs from node jerma-worker2 pod pod-4771b964-00db-48e0-b646-22229b5715a1 container test-container: STEP: delete the pod Feb 3 21:13:48.172: INFO: Waiting for pod pod-4771b964-00db-48e0-b646-22229b5715a1 to disappear Feb 3 21:13:48.281: INFO: Pod pod-4771b964-00db-48e0-b646-22229b5715a1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:48.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5859" for this suite. • [SLOW TEST:6.394 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:48.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Feb 3 21:13:54.542: INFO: Pod pod-hostip-e5a18522-7b9b-4f8b-a518-3ecdc5f8fa4b has hostIP: 172.18.0.5 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:54.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5207" for this suite. 
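The hostIP assertion above needs nothing beyond a scheduled, started pod: once the pod is bound to a node, the kubelet publishes that node's address in the pod's status. A sketch of the same read done by hand (pod and namespace names taken from this run; jsonpath is a standard kubectl output format):

  # Prints the node address recorded in the pod's status; empty until the pod is bound.
  kubectl get pod pod-hostip-e5a18522-7b9b-4f8b-a518-3ecdc5f8fa4b \
    --namespace=pods-5207 -o jsonpath='{.status.hostIP}'

Here it would print 172.18.0.5, which the earlier describe output shows is jerma-worker2's address.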
• [SLOW TEST:6.259 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1589,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:54.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:13:54.665: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:13:55.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1328" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":93,"skipped":1594,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:13:55.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-da5d3edd-5627-4504-b5cc-cf15a22ab2f1 STEP: Creating a pod to test consume configMaps Feb 3 21:13:55.815: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9" in namespace "projected-8723" to be "success or failure" Feb 3 21:13:55.822: INFO: Pod "pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.617674ms Feb 3 21:13:57.864: INFO: Pod "pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048466793s Feb 3 21:13:59.886: INFO: Pod "pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07103139s STEP: Saw pod success Feb 3 21:13:59.886: INFO: Pod "pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9" satisfied condition "success or failure" Feb 3 21:13:59.889: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:14:00.175: INFO: Waiting for pod pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9 to disappear Feb 3 21:14:00.181: INFO: Pod pod-projected-configmaps-19592179-f4f4-417f-ad98-f6758db1b6f9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:00.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8723" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1605,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:00.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:14:00.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf" in namespace "projected-8519" to be "success or failure" Feb 3 21:14:00.308: INFO: Pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 47.31684ms Feb 3 21:14:02.312: INFO: Pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051149626s Feb 3 21:14:04.316: INFO: Pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055315489s Feb 3 21:14:06.321: INFO: Pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.059816703s STEP: Saw pod success Feb 3 21:14:06.321: INFO: Pod "downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf" satisfied condition "success or failure" Feb 3 21:14:06.324: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf container client-container: STEP: delete the pod Feb 3 21:14:06.398: INFO: Waiting for pod downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf to disappear Feb 3 21:14:06.415: INFO: Pod downwardapi-volume-6accfcba-a356-48f5-83fc-ac2295f36fdf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8519" for this suite. • [SLOW TEST:6.237 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1610,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:06.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Feb 3 21:14:06.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6429' Feb 3 21:14:06.800: INFO: stderr: "" Feb 3 21:14:06.800: INFO: stdout: "pod/pause created\n" Feb 3 21:14:06.800: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 3 21:14:06.800: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6429" to be "running and ready" Feb 3 21:14:06.805: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.914954ms Feb 3 21:14:08.808: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008148631s Feb 3 21:14:11.099: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.298755684s Feb 3 21:14:11.099: INFO: Pod "pause" satisfied condition "running and ready" Feb 3 21:14:11.099: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Feb 3 21:14:11.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6429' Feb 3 21:14:11.250: INFO: stderr: "" Feb 3 21:14:11.250: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 3 21:14:11.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6429' Feb 3 21:14:11.332: INFO: stderr: "" Feb 3 21:14:11.332: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 3 21:14:11.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6429' Feb 3 21:14:11.415: INFO: stderr: "" Feb 3 21:14:11.415: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 3 21:14:11.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6429' Feb 3 21:14:11.527: INFO: stderr: "" Feb 3 21:14:11.528: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Feb 3 21:14:11.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6429' Feb 3 21:14:11.672: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 21:14:11.672: INFO: stdout: "pod \"pause\" force deleted\n" Feb 3 21:14:11.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6429' Feb 3 21:14:11.788: INFO: stderr: "No resources found in kubectl-6429 namespace.\n" Feb 3 21:14:11.788: INFO: stdout: "" Feb 3 21:14:11.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6429 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 21:14:11.921: INFO: stderr: "" Feb 3 21:14:11.921: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:11.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6429" for this suite. 
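The label round-trip above is plain kubectl usage: a trailing hyphen on the key deletes the label, and -L adds a column for it in get output. Condensed, with names from this run:

  kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-6429
  kubectl get pod pause -L testing-label --namespace=kubectl-6429   # column shows testing-label-value
  kubectl label pods pause testing-label- --namespace=kubectl-6429  # trailing '-' removes the key
  kubectl get pod pause -L testing-label --namespace=kubectl-6429   # column is now empty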
• [SLOW TEST:5.502 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":96,"skipped":1623,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:11.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2407.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2407.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 21:14:20.380: INFO: DNS probes using dns-2407/dns-test-2af56b28-f725-433e-9f57-12effb00039e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:20.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2407" for this suite. 
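Unlike the earlier service spec, this probe exercises hostname records through the system resolver rather than dig: getent hosts goes through the pod's search domains, which is why the short name dns-querier-2 is also expected to resolve. One check from the loop above, unescaped for plain shell:

  # Succeeds once the headless service publishes the querier pod's hostname record.
  test -n "$(getent hosts dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local)" \
    && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2407.svc.cluster.local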
• [SLOW TEST:8.620 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1637,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:20.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Feb 3 21:14:21.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4607' Feb 3 21:14:21.403: INFO: stderr: "" Feb 3 21:14:21.403: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 21:14:21.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4607' Feb 3 21:14:21.525: INFO: stderr: "" Feb 3 21:14:21.525: INFO: stdout: "update-demo-nautilus-blv2k update-demo-nautilus-c46bd " Feb 3 21:14:21.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blv2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:21.615: INFO: stderr: "" Feb 3 21:14:21.615: INFO: stdout: "" Feb 3 21:14:21.615: INFO: update-demo-nautilus-blv2k is created but not running Feb 3 21:14:26.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4607' Feb 3 21:14:26.714: INFO: stderr: "" Feb 3 21:14:26.714: INFO: stdout: "update-demo-nautilus-blv2k update-demo-nautilus-c46bd " Feb 3 21:14:26.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blv2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:26.819: INFO: stderr: "" Feb 3 21:14:26.819: INFO: stdout: "true" Feb 3 21:14:26.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blv2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:26.912: INFO: stderr: "" Feb 3 21:14:26.913: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:14:26.913: INFO: validating pod update-demo-nautilus-blv2k Feb 3 21:14:26.917: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:14:26.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:14:26.917: INFO: update-demo-nautilus-blv2k is verified up and running Feb 3 21:14:26.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c46bd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:27.014: INFO: stderr: "" Feb 3 21:14:27.014: INFO: stdout: "" Feb 3 21:14:27.014: INFO: update-demo-nautilus-c46bd is created but not running Feb 3 21:14:32.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4607' Feb 3 21:14:32.123: INFO: stderr: "" Feb 3 21:14:32.123: INFO: stdout: "update-demo-nautilus-blv2k update-demo-nautilus-c46bd " Feb 3 21:14:32.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blv2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:32.215: INFO: stderr: "" Feb 3 21:14:32.215: INFO: stdout: "true" Feb 3 21:14:32.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-blv2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:32.320: INFO: stderr: "" Feb 3 21:14:32.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:14:32.320: INFO: validating pod update-demo-nautilus-blv2k Feb 3 21:14:32.324: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:14:32.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:14:32.324: INFO: update-demo-nautilus-blv2k is verified up and running Feb 3 21:14:32.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c46bd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:32.417: INFO: stderr: "" Feb 3 21:14:32.417: INFO: stdout: "true" Feb 3 21:14:32.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c46bd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4607' Feb 3 21:14:32.524: INFO: stderr: "" Feb 3 21:14:32.524: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 21:14:32.524: INFO: validating pod update-demo-nautilus-c46bd Feb 3 21:14:32.528: INFO: got data: { "image": "nautilus.jpg" } Feb 3 21:14:32.528: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 21:14:32.529: INFO: update-demo-nautilus-c46bd is verified up and running STEP: using delete to clean up resources Feb 3 21:14:32.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4607' Feb 3 21:14:32.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 21:14:32.635: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 3 21:14:32.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4607' Feb 3 21:14:32.726: INFO: stderr: "No resources found in kubectl-4607 namespace.\n" Feb 3 21:14:32.726: INFO: stdout: "" Feb 3 21:14:32.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 21:14:32.819: INFO: stderr: "" Feb 3 21:14:32.819: INFO: stdout: "update-demo-nautilus-blv2k\nupdate-demo-nautilus-c46bd\n" Feb 3 21:14:33.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4607' Feb 3 21:14:33.422: INFO: stderr: "No resources found in kubectl-4607 namespace.\n" Feb 3 21:14:33.422: INFO: stdout: "" Feb 3 21:14:33.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 21:14:33.511: INFO: stderr: "" Feb 3 21:14:33.511: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:33.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4607" for this suite. 
• [SLOW TEST:12.970 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":98,"skipped":1648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:33.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:14:33.657: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 3 21:14:36.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9409 create -f -' Feb 3 21:14:39.764: INFO: stderr: "" Feb 3 21:14:39.764: INFO: stdout: "e2e-test-crd-publish-openapi-3646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 3 21:14:39.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9409 delete e2e-test-crd-publish-openapi-3646-crds test-cr' Feb 3 21:14:39.864: INFO: stderr: "" Feb 3 21:14:39.864: INFO: stdout: "e2e-test-crd-publish-openapi-3646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 3 21:14:39.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9409 apply -f -' Feb 3 21:14:40.166: INFO: stderr: "" Feb 3 21:14:40.166: INFO: stdout: "e2e-test-crd-publish-openapi-3646-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 3 21:14:40.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9409 delete e2e-test-crd-publish-openapi-3646-crds test-cr' Feb 3 21:14:40.290: INFO: stderr: "" Feb 3 21:14:40.291: INFO: stdout: "e2e-test-crd-publish-openapi-3646-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 3 21:14:40.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3646-crds' Feb 3 21:14:40.537: INFO: stderr: "" Feb 3 21:14:40.537: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3646-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:14:43.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9409" for this suite. • [SLOW TEST:9.941 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":99,"skipped":1671,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:14:43.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-tsxs STEP: Creating a pod to test atomic-volume-subpath Feb 3 21:14:43.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tsxs" in namespace "subpath-9678" to be "success or failure" Feb 3 21:14:43.595: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Pending", Reason="", readiness=false. Elapsed: 21.595138ms Feb 3 21:14:45.665: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091698038s Feb 3 21:14:47.670: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 4.096153539s Feb 3 21:14:49.674: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 6.099968834s Feb 3 21:14:51.678: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 8.104072326s Feb 3 21:14:53.682: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 10.108103686s Feb 3 21:14:55.686: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 12.112477911s Feb 3 21:14:57.690: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 14.116276601s Feb 3 21:14:59.694: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 16.119957093s Feb 3 21:15:01.698: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 18.12420933s Feb 3 21:15:03.701: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.127606699s Feb 3 21:15:05.705: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 22.131618659s Feb 3 21:15:07.709: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Running", Reason="", readiness=true. Elapsed: 24.135848503s Feb 3 21:15:09.719: INFO: Pod "pod-subpath-test-configmap-tsxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.145450736s STEP: Saw pod success Feb 3 21:15:09.719: INFO: Pod "pod-subpath-test-configmap-tsxs" satisfied condition "success or failure" Feb 3 21:15:09.723: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-tsxs container test-container-subpath-configmap-tsxs: STEP: delete the pod Feb 3 21:15:09.742: INFO: Waiting for pod pod-subpath-test-configmap-tsxs to disappear Feb 3 21:15:09.746: INFO: Pod pod-subpath-test-configmap-tsxs no longer exists STEP: Deleting pod pod-subpath-test-configmap-tsxs Feb 3 21:15:09.746: INFO: Deleting pod "pod-subpath-test-configmap-tsxs" in namespace "subpath-9678" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:09.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9678" for this suite. • [SLOW TEST:26.293 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":100,"skipped":1682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:09.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:15:09.819: INFO: Creating ReplicaSet my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987 Feb 3 21:15:09.830: INFO: Pod name my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987: Found 0 pods out of 1 Feb 3 21:15:14.834: INFO: Pod name my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987: Found 1 pods out of 1 Feb 3 21:15:14.834: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987" is running Feb 3 21:15:14.836: INFO: Pod "my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987-7lqb5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:15:09 
+0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:15:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:15:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 21:15:09 +0000 UTC Reason: Message:}]) Feb 3 21:15:14.836: INFO: Trying to dial the pod Feb 3 21:15:19.847: INFO: Controller my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987: Got expected result from replica 1 [my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987-7lqb5]: "my-hostname-basic-2e1f5912-faaf-4a79-9feb-782e3a1cc987-7lqb5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:19.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7096" for this suite. • [SLOW TEST:10.100 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":101,"skipped":1716,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:19.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-n574 STEP: Creating a pod to test atomic-volume-subpath Feb 3 21:15:19.994: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-n574" in namespace "subpath-2467" to be "success or failure" Feb 3 21:15:20.055: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Pending", Reason="", readiness=false. Elapsed: 61.160106ms Feb 3 21:15:22.059: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064834872s Feb 3 21:15:24.063: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 4.068947908s Feb 3 21:15:26.067: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 6.073075821s Feb 3 21:15:28.071: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.077484868s Feb 3 21:15:30.076: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 10.081924938s Feb 3 21:15:32.080: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 12.08578647s Feb 3 21:15:34.083: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 14.089201835s Feb 3 21:15:36.087: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 16.093260422s Feb 3 21:15:38.091: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 18.097531914s Feb 3 21:15:40.095: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 20.100951093s Feb 3 21:15:42.098: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Running", Reason="", readiness=true. Elapsed: 22.104561096s Feb 3 21:15:44.102: INFO: Pod "pod-subpath-test-downwardapi-n574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.108515736s STEP: Saw pod success Feb 3 21:15:44.102: INFO: Pod "pod-subpath-test-downwardapi-n574" satisfied condition "success or failure" Feb 3 21:15:44.106: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-n574 container test-container-subpath-downwardapi-n574: STEP: delete the pod Feb 3 21:15:44.130: INFO: Waiting for pod pod-subpath-test-downwardapi-n574 to disappear Feb 3 21:15:44.159: INFO: Pod pod-subpath-test-downwardapi-n574 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-n574 Feb 3 21:15:44.159: INFO: Deleting pod "pod-subpath-test-downwardapi-n574" in namespace "subpath-2467" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:44.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2467" for this suite. 
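Both Subpath specs above (configmap and downward API) share one shape: an atomic-writer volume is mounted with subPath so the container sees a single projected file rather than the whole volume directory. A minimal hand-rolled configmap variant, with all names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  mount-file: "hello from the configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/mnt/file"]
    volumeMounts:
    - name: cm
      mountPath: /mnt/file
      subPath: mount-file   # mount one entry of the volume, not the directory
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm
EOF

One caveat worth knowing: unlike a full configmap mount, a subPath mount does not pick up later updates to the ConfigMap.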
• [SLOW TEST:24.313 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":102,"skipped":1722,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:44.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Feb 3 21:15:44.504: INFO: Waiting up to 5m0s for pod "var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e" in namespace "var-expansion-9876" to be "success or failure" Feb 3 21:15:44.507: INFO: Pod "var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.74547ms Feb 3 21:15:46.511: INFO: Pod "var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00707s Feb 3 21:15:48.534: INFO: Pod "var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029950808s STEP: Saw pod success Feb 3 21:15:48.534: INFO: Pod "var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e" satisfied condition "success or failure" Feb 3 21:15:48.537: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e container dapi-container: STEP: delete the pod Feb 3 21:15:48.679: INFO: Waiting for pod var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e to disappear Feb 3 21:15:48.687: INFO: Pod var-expansion-1e4245c3-8f77-4bc1-a816-32364c99fd4e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:48.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9876" for this suite. 
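The Variable Expansion spec verifies that $(VAR) references in a container's command are substituted by Kubernetes from the container's own env before the process starts, with no shell involved. A sketch under hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "expanded by the kubelet"
    # $(MESSAGE) below is replaced with the env value at container start;
    # no shell performs this substitution.
    command: ["/bin/echo", "$(MESSAGE)"]
EOF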
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:48.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e983222b-966a-4149-adb6-9c0a6418f9ed STEP: Creating a pod to test consume configMaps Feb 3 21:15:48.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1" in namespace "configmap-667" to be "success or failure" Feb 3 21:15:48.819: INFO: Pod "pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.28701ms Feb 3 21:15:50.823: INFO: Pod "pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0073515s Feb 3 21:15:52.829: INFO: Pod "pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013615859s STEP: Saw pod success Feb 3 21:15:52.829: INFO: Pod "pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1" satisfied condition "success or failure" Feb 3 21:15:52.832: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1 container configmap-volume-test: STEP: delete the pod Feb 3 21:15:52.850: INFO: Waiting for pod pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1 to disappear Feb 3 21:15:52.875: INFO: Pod pod-configmaps-48d71cf9-ed54-4de4-b9aa-1181498650e1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:52.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-667" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:52.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:15:52.925: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:15:53.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-868" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":105,"skipped":1831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:15:53.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:15:54.842: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:15:56.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983754, 
loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983754, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983754, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983754, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:15:59.878: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:16:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1300" for this suite. STEP: Destroying namespace "webhook-1300-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.567 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":106,"skipped":1869,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:16:10.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f401a3f0-8b6b-4f5c-8278-e16163385202 STEP: Creating a pod to test consume configMaps Feb 3 21:16:10.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5" in namespace "configmap-8762" to be "success or failure" Feb 3 21:16:10.227: INFO: Pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31321ms Feb 3 21:16:12.231: INFO: Pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007934195s Feb 3 21:16:14.234: INFO: Pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011180611s Feb 3 21:16:16.238: INFO: Pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015311563s STEP: Saw pod success Feb 3 21:16:16.238: INFO: Pod "pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5" satisfied condition "success or failure" Feb 3 21:16:16.241: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5 container configmap-volume-test: STEP: delete the pod Feb 3 21:16:16.256: INFO: Waiting for pod pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5 to disappear Feb 3 21:16:16.261: INFO: Pod pod-configmaps-ce886bd6-3ecd-45c5-a40f-dca8a51f40b5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:16:16.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8762" for this suite. 
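Stepping back to the AdmissionWebhook deny spec above: the registration it performs via the AdmissionRegistration API is a single object. A skeletal version of such a registration, omitting the caBundle the API server needs to trust the service, and with the handler path and webhook name hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-demo
webhooks:
- name: deny-pods-and-configmaps.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-1300    # namespace from the run above
      name: e2e-test-webhook
      path: /always-deny         # hypothetical handler path
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail            # errors and timeouts reject the request, as the hang case exercises
EOF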
• [SLOW TEST:6.123 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1872,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:16:16.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:16:16.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 3 21:16:16.511: INFO: stderr: "" Feb 3 21:16:16.511: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.16\", GitCommit:\"d88fadbd65c5e8bde22630d251766a634c7613b0\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:15:37Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.11\", GitCommit:\"ea5f00d93211b7c80247bf607cfa422ad6fb5347\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T07:50:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:16:16.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1857" for this suite. 
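The Kubectl version spec only asserts that both the client and server stanzas are printed. The manual equivalents (the second form assumes your kubectl supports -o json, which this v1.17 client does):

kubectl version            # human-readable Client Version / Server Version lines
kubectl version -o json    # the same fields as structured output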
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":108,"skipped":1890,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:16:16.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629 [It] should create a deployment from an image [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 21:16:16.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7847' Feb 3 21:16:16.740: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 3 21:16:16.740: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 Feb 3 21:16:18.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7847' Feb 3 21:16:18.890: INFO: stderr: "" Feb 3 21:16:18.890: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:16:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7847" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":109,"skipped":1896,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:16:19.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0203 21:17:00.155761 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 3 21:17:00.155: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:00.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1347" for this suite. 
• [SLOW TEST:41.094 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":110,"skipped":1910,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:00.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 3 21:17:00.254: INFO: Waiting up to 5m0s for pod "pod-5636a944-5cae-4158-a28b-50a20c11a6bc" in namespace "emptydir-5065" to be "success or failure" Feb 3 21:17:00.265: INFO: Pod "pod-5636a944-5cae-4158-a28b-50a20c11a6bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.576072ms Feb 3 21:17:02.269: INFO: Pod "pod-5636a944-5cae-4158-a28b-50a20c11a6bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01450937s Feb 3 21:17:04.273: INFO: Pod "pod-5636a944-5cae-4158-a28b-50a20c11a6bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018837686s STEP: Saw pod success Feb 3 21:17:04.273: INFO: Pod "pod-5636a944-5cae-4158-a28b-50a20c11a6bc" satisfied condition "success or failure" Feb 3 21:17:04.276: INFO: Trying to get logs from node jerma-worker pod pod-5636a944-5cae-4158-a28b-50a20c11a6bc container test-container: STEP: delete the pod Feb 3 21:17:04.298: INFO: Waiting for pod pod-5636a944-5cae-4158-a28b-50a20c11a6bc to disappear Feb 3 21:17:04.301: INFO: Pod pod-5636a944-5cae-4158-a28b-50a20c11a6bc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:04.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5065" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:04.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:20.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1193" for this suite. • [SLOW TEST:16.089 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":112,"skipped":1950,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:20.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 21:17:21.332: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 21:17:23.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983841, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983841, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983841, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983841, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 21:17:26.404: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:26.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6064" for 
this suite. STEP: Destroying namespace "webhook-6064-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":113,"skipped":1957,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:26.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:17:26.732: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43" in namespace "security-context-test-6600" to be "success or failure" Feb 3 21:17:26.751: INFO: Pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43": Phase="Pending", Reason="", readiness=false. Elapsed: 18.389724ms Feb 3 21:17:28.754: INFO: Pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022338749s Feb 3 21:17:30.758: INFO: Pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43": Phase="Running", Reason="", readiness=true. Elapsed: 4.026015704s Feb 3 21:17:32.762: INFO: Pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030306188s Feb 3 21:17:32.762: INFO: Pod "alpine-nnp-false-c26702fa-3111-407c-bc44-3f00dc094a43" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:32.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6600" for this suite. 
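The Security Context spec runs an Alpine image as a non-root user with allowPrivilegeEscalation: false, which sets the no_new_privs flag so setuid binaries cannot raise the effective UID. A minimal sketch (the payload command is a stand-in; the e2e image ships its own checker):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "id -u"]        # stand-in payload
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false     # sets no_new_privs for the container process
EOF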
• [SLOW TEST:6.133 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1977,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:32.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Feb 3 21:17:32.828: INFO: Waiting up to 5m0s for pod "downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0" in namespace "downward-api-4714" to be "success or failure" Feb 3 21:17:32.832: INFO: Pod "downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346444ms Feb 3 21:17:34.836: INFO: Pod "downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007873937s Feb 3 21:17:36.840: INFO: Pod "downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012120356s STEP: Saw pod success Feb 3 21:17:36.840: INFO: Pod "downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0" satisfied condition "success or failure" Feb 3 21:17:36.844: INFO: Trying to get logs from node jerma-worker2 pod downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0 container dapi-container: STEP: delete the pod Feb 3 21:17:36.883: INFO: Waiting for pod downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0 to disappear Feb 3 21:17:36.925: INFO: Pod downward-api-67c82666-44b6-4cac-8565-9f26835cbeb0 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:36.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4714" for this suite. 
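The Downward API spec surfaces pod metadata as env vars through fieldRef; the three fields in its name map one-to-one onto fieldPath values. A sketch with hypothetical pod and variable names (the conformance pod uses its own image and container name):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF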
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1978,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:36.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 3 21:17:36.978: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Feb 3 21:17:37.889: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 3 21:17:40.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:17:42.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747983857, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 21:17:45.016: INFO: Waited 624.74156ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:45.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4923" for this suite. • [SLOW TEST:8.698 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":116,"skipped":1981,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:45.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 21:17:46.129: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 21:17:46.205: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 21:17:46.207: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Feb 3 21:17:46.211: INFO: chaos-controller-manager-7f9bbd476f-2hzrh from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:46.211: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 21:17:46.211: INFO: chaos-daemon-f2nl5 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:46.211: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:17:46.211: INFO: kindnet-c2jgb from kube-system started at 2021-01-10 17:30:25 +0000 UTC (1 container status recorded) Feb 3 21:17:46.211: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 21:17:46.211: INFO: kube-proxy-gdgm6 from kube-system started at 2021-01-10 17:29:37 +0000 UTC (1 container status recorded) Feb 3 21:17:46.211: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:17:46.211: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Feb 3 21:17:46.215: INFO: chaos-daemon-n2277 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:46.215: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:17:46.215: INFO: kindnet-4ww4f from kube-system started at 2021-01-10 17:29:22 +0000 UTC (1 container status recorded) Feb 3 21:17:46.215: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 21:17:46.215: INFO: kube-proxy-8vfzd from kube-system started at 2021-01-10 17:29:16 +0000 UTC (1 container status recorded) Feb 3 21:17:46.215: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Feb 3 21:17:46.742: INFO: Pod chaos-controller-manager-7f9bbd476f-2hzrh requesting resource cpu=25m on Node jerma-worker Feb 3 21:17:46.742: INFO: Pod chaos-daemon-f2nl5 requesting resource cpu=0m on Node jerma-worker Feb 3 21:17:46.742: INFO: Pod chaos-daemon-n2277 requesting resource cpu=0m on Node jerma-worker2 Feb 3 21:17:46.742: INFO: Pod kindnet-4ww4f requesting resource cpu=100m on Node jerma-worker2 Feb 3 21:17:46.742: INFO: Pod kindnet-c2jgb requesting resource cpu=100m on Node jerma-worker Feb 3 21:17:46.742: INFO: Pod kube-proxy-8vfzd requesting resource cpu=0m on Node jerma-worker2 Feb 3 21:17:46.743: INFO: Pod kube-proxy-gdgm6 requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. Feb 3 21:17:46.743: INFO: Creating a pod which consumes cpu=11112m on Node jerma-worker Feb 3 21:17:46.750: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995.1660597e95410f7e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4306/filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995.1660597f0019d1b5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995.1660597f52919f44], Reason = [Created], Message = [Created container filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995] STEP: Considering event: Type = [Normal], Name = [filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995.1660597f62b26a13], Reason = [Started], Message = [Started container filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995] STEP: Considering event: Type = [Normal], Name = [filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74.1660597e94598b6b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4306/filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74.1660597ee67e16e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74.1660597f3f9cd209], Reason = [Created], Message = [Created container filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74] STEP: Considering event: Type = [Normal], Name = [filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74.1660597f573d061e], Reason = [Started], Message = [Started container filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74] STEP: Considering event: Type = [Warning], Name = [additional-pod.1660597ffe82851b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1660597fff504562], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:17:53.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4306" for this suite. 
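An aside on the mechanics above: the predicate test saturates each node with a "filler" pause pod sized from that node's allocatable CPU (cpu=11112m and cpu=11130m in this run), so that one more pod with any meaningful CPU request must fail with "Insufficient cpu". A sketch of such a filler pod in Go follows; the request quantity is hard-coded here for illustration, whereas the suite computes it per node.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1", // image as reported in the events above
				Resources: corev1.ResourceRequirements{
					// A request large enough that the node cannot also fit the
					// test's follow-up pod; the suite derives this value from
					// node allocatable, hard-coded here for illustration.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("11112m"),
					},
				},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}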
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.337 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":117,"skipped":1985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:17:53.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 3 21:17:54.023: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 21:17:54.064: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 21:17:54.066: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Feb 3 21:17:54.073: INFO: kube-proxy-gdgm6 from kube-system started at 2021-01-10 17:29:37 +0000 UTC (1 container status recorded) Feb 3 21:17:54.073: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:17:54.073: INFO: filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74 from sched-pred-4306 started at 2021-02-03 21:17:46 +0000 UTC (1 container status recorded) Feb 3 21:17:54.073: INFO: Container filler-pod-c9c23dd2-59d3-41e2-a76d-aaa22f81df74 ready: true, restart count 0 Feb 3 21:17:54.073: INFO: chaos-daemon-f2nl5 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:54.073: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:17:54.073: INFO: chaos-controller-manager-7f9bbd476f-2hzrh from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:54.073: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 21:17:54.073: INFO: kindnet-c2jgb from kube-system started at 2021-01-10 17:30:25 +0000 UTC (1 container status recorded) Feb 3 21:17:54.073: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 21:17:54.073: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Feb 3 21:17:54.079: INFO: filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995 from sched-pred-4306 started at 2021-02-03 21:17:46 +0000 UTC (1 container status recorded) Feb 3 21:17:54.079: INFO: Container filler-pod-5c89490f-289e-4df7-87ed-81ebed92b995 ready: true, restart count 0 Feb 3 21:17:54.079: INFO: kube-proxy-8vfzd from kube-system started at 2021-01-10 17:29:16 +0000 UTC (1 container status recorded) Feb 3 21:17:54.079: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 21:17:54.079: INFO: chaos-daemon-n2277 from default started at 2021-01-11 01:07:04 +0000 UTC (1 container status recorded) Feb 3 21:17:54.079: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 21:17:54.079: INFO: kindnet-4ww4f from kube-system started at 2021-01-10 17:29:22 +0000 UTC (1 container status recorded) Feb 3 21:17:54.079: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16605981aee09175], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:01.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2699" for this suite. 
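An aside on the FailedScheduling event above: it comes from a pod whose nodeSelector matches no node label, so every node is rejected with "didn't match node selector". A minimal sketch follows; the selector key/value pair and image are illustrative assumptions (the suite uses a generated label).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"}, // name as in the event above
		Spec: corev1.PodSpec{
			// A label pair no node carries (hypothetical values); the
			// scheduler then reports the Warning event seen in the log.
			NodeSelector: map[string]string{"env": "no-such-label"},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}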
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.159 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":118,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:01.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 3 21:18:01.285: INFO: Waiting up to 5m0s for pod "pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6" in namespace "emptydir-4866" to be "success or failure" Feb 3 21:18:01.296: INFO: Pod "pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.536107ms Feb 3 21:18:03.301: INFO: Pod "pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015807201s Feb 3 21:18:05.305: INFO: Pod "pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01972537s STEP: Saw pod success Feb 3 21:18:05.305: INFO: Pod "pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6" satisfied condition "success or failure" Feb 3 21:18:05.308: INFO: Trying to get logs from node jerma-worker pod pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6 container test-container: STEP: delete the pod Feb 3 21:18:05.436: INFO: Waiting for pod pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6 to disappear Feb 3 21:18:05.492: INFO: Pod pod-4dee32a4-0b7d-4b93-bd1b-f9e6390c55c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:05.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4866" for this suite. 
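An aside on the "(non-root,0666,tmpfs)" variant above: the tmpfs half comes from a memory-backed emptyDir volume, while the non-root user and the 0666 file mode are exercised by the test container itself. A minimal Go sketch follows; the image and shell command stand in for the suite's mounttest binary and are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder; the suite uses its own mounttest image
				// Placeholder for the mounttest behaviour: create a file with
				// mode 0666 and list it back for verification.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}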
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2041,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:05.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating a pod Feb 3 21:18:05.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8279 -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 3 21:18:05.757: INFO: stderr: "" Feb 3 21:18:05.757: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Feb 3 21:18:05.757: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 3 21:18:05.757: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8279" to be "running and ready, or succeeded" Feb 3 21:18:05.766: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201458ms Feb 3 21:18:07.848: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090877303s Feb 3 21:18:09.852: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.095087981s Feb 3 21:18:09.852: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 3 21:18:09.852: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Feb 3 21:18:09.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279' Feb 3 21:18:09.955: INFO: stderr: "" Feb 3 21:18:09.955: INFO: stdout: "I0203 21:18:08.488113 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/rn4 472\nI0203 21:18:08.688239 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/zlz 460\nI0203 21:18:08.888337 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/rwsd 486\nI0203 21:18:09.088347 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/ttlx 375\nI0203 21:18:09.288359 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/2mh 366\nI0203 21:18:09.488276 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/hr8d 356\nI0203 21:18:09.688305 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mct 583\nI0203 21:18:09.888325 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5fw 595\n" STEP: limiting log lines Feb 3 21:18:09.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279 --tail=1' Feb 3 21:18:10.056: INFO: stderr: "" Feb 3 21:18:10.056: INFO: stdout: "I0203 21:18:09.888325 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5fw 595\n" Feb 3 21:18:10.056: INFO: got output "I0203 21:18:09.888325 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5fw 595\n" STEP: limiting log bytes Feb 3 21:18:10.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279 --limit-bytes=1' Feb 3 21:18:10.155: INFO: stderr: "" Feb 3 21:18:10.155: INFO: stdout: "I" Feb 3 21:18:10.155: INFO: got output "I" STEP: exposing timestamps Feb 3 21:18:10.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279 --tail=1 --timestamps' Feb 3 21:18:10.266: INFO: stderr: "" Feb 3 21:18:10.266: INFO: stdout: "2021-02-03T21:18:10.088406037Z I0203 21:18:10.088285 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/glw 263\n" Feb 3 21:18:10.266: INFO: got output "2021-02-03T21:18:10.088406037Z I0203 21:18:10.088285 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/glw 263\n" STEP: restricting to a time range Feb 3 21:18:12.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279 --since=1s' Feb 3 21:18:12.871: INFO: stderr: "" Feb 3 21:18:12.871: INFO: stdout: "I0203 21:18:11.888322 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/ztnl 260\nI0203 21:18:12.088271 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/zk5w 239\nI0203 21:18:12.288353 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/vpk 300\nI0203 21:18:12.488303 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/dn8 218\nI0203 21:18:12.688267 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/kqg 559\n" Feb 3 21:18:12.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8279 --since=24h' Feb 3 21:18:12.972: INFO: stderr: "" Feb 3 21:18:12.972: INFO: stdout: "I0203 21:18:08.488113 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/rn4 472\nI0203 21:18:08.688239 1 logs_generator.go:76] 1 GET 
/api/v1/namespaces/ns/pods/zlz 460\nI0203 21:18:08.888337 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/rwsd 486\nI0203 21:18:09.088347 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/ttlx 375\nI0203 21:18:09.288359 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/2mh 366\nI0203 21:18:09.488276 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/hr8d 356\nI0203 21:18:09.688305 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mct 583\nI0203 21:18:09.888325 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/5fw 595\nI0203 21:18:10.088285 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/glw 263\nI0203 21:18:10.288303 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/5g68 490\nI0203 21:18:10.488266 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/zq4c 205\nI0203 21:18:10.688307 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/ptml 558\nI0203 21:18:10.888308 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/6p9h 431\nI0203 21:18:11.088306 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/vf9 326\nI0203 21:18:11.288285 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/fbkl 479\nI0203 21:18:11.488271 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/wlmd 338\nI0203 21:18:11.688293 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/9tr9 339\nI0203 21:18:11.888322 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/ztnl 260\nI0203 21:18:12.088271 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/zk5w 239\nI0203 21:18:12.288353 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/vpk 300\nI0203 21:18:12.488303 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/dn8 218\nI0203 21:18:12.688267 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/kqg 559\nI0203 21:18:12.888301 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/lw5 438\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 3 21:18:12.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8279' Feb 3 21:18:22.116: INFO: stderr: "" Feb 3 21:18:22.116: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:18:22.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8279" for this suite. 
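For quick reference, the four filtering behaviours verified above map onto these kubectl flags, abbreviated from the invocations in the log: --tail=1 (keep only the last N lines), --limit-bytes=1 (cap output at a byte count), --timestamps (prefix each line with its timestamp), and --since=1s / --since=24h (return only lines newer than the given duration).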
• [SLOW TEST:16.594 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":120,"skipped":2060,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:18:22.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6940 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Feb 3 21:18:22.225: INFO: Found 0 stateful pods, waiting for 3 Feb 3 21:18:32.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 3 21:18:32.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 3 21:18:32.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 3 21:18:42.255: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 3 21:18:42.255: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 3 21:18:42.255: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 3 21:18:42.279: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 3 21:18:52.326: INFO: Updating stateful set ss2 Feb 3 21:18:52.362: INFO: Waiting for Pod statefulset-6940/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Feb 3 21:19:02.526: INFO: Found 2 stateful pods, waiting for 3 Feb 3 21:19:12.531: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 3 21:19:12.531: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 
3 21:19:12.531: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 3 21:19:12.555: INFO: Updating stateful set ss2 Feb 3 21:19:12.564: INFO: Waiting for Pod statefulset-6940/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 3 21:19:22.590: INFO: Updating stateful set ss2 Feb 3 21:19:22.614: INFO: Waiting for StatefulSet statefulset-6940/ss2 to complete update Feb 3 21:19:22.614: INFO: Waiting for Pod statefulset-6940/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 3 21:19:32.620: INFO: Waiting for StatefulSet statefulset-6940/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 3 21:19:42.622: INFO: Deleting all statefulset in ns statefulset-6940 Feb 3 21:19:42.625: INFO: Scaling statefulset ss2 to 0 Feb 3 21:20:02.645: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 21:20:02.648: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:02.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6940" for this suite. • [SLOW TEST:100.565 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":121,"skipped":2060,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:02.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 3 21:20:02.729: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:08.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6126" for this suite. • [SLOW TEST:6.144 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":122,"skipped":2075,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:08.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Feb 3 21:20:09.171: INFO: Waiting up to 5m0s for pod "client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64" in namespace "containers-1700" to be "success or failure" Feb 3 21:20:09.329: INFO: Pod "client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64": Phase="Pending", Reason="", readiness=false. Elapsed: 158.163561ms Feb 3 21:20:11.333: INFO: Pod "client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161818701s Feb 3 21:20:13.336: INFO: Pod "client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165410876s STEP: Saw pod success Feb 3 21:20:13.336: INFO: Pod "client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64" satisfied condition "success or failure" Feb 3 21:20:13.339: INFO: Trying to get logs from node jerma-worker2 pod client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64 container test-container: STEP: delete the pod Feb 3 21:20:13.381: INFO: Waiting for pod client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64 to disappear Feb 3 21:20:13.386: INFO: Pod client-containers-c43a2843-59f6-4c91-bdc6-c5360e839a64 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:20:13.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1700" for this suite. 
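An aside on the "override all" pod above: in the v1 API, a container's Command replaces the image's ENTRYPOINT and its Args replaces the image's CMD; setting both overrides the image's entire startup line. A minimal sketch follows; the image and the echoed text are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-all-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder image
				// Command replaces ENTRYPOINT, Args replaces CMD.
				Command: []string{"/bin/sh"},
				Args:    []string{"-c", "echo overridden entrypoint and arguments"},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}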
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2077,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:20:13.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3483" for this suite. • [SLOW TEST:60.085 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2078,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:13.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-40acbdca-e3ff-4a2e-b6b8-7a0bc3b854a5 STEP: Creating a pod to test consume secrets Feb 3 21:21:13.539: INFO: Waiting up to 5m0s for pod "pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11" in namespace "secrets-4471" to be "success or failure" Feb 3 21:21:13.581: INFO: Pod "pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11": Phase="Pending", Reason="", readiness=false. Elapsed: 41.880234ms Feb 3 21:21:15.820: INFO: Pod "pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.280789939s Feb 3 21:21:17.823: INFO: Pod "pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284197431s STEP: Saw pod success Feb 3 21:21:17.823: INFO: Pod "pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11" satisfied condition "success or failure" Feb 3 21:21:17.825: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11 container secret-volume-test: STEP: delete the pod Feb 3 21:21:17.879: INFO: Waiting for pod pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11 to disappear Feb 3 21:21:17.885: INFO: Pod pod-secrets-ecdeb4b5-d399-4996-b633-ca298cc6df11 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:17.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4471" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2085,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:17.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7676/secret-test-dc1faccd-7326-4e08-a8e1-35bf7b640b31 STEP: Creating a pod to test consume secrets Feb 3 21:21:17.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5" in namespace "secrets-7676" to be "success or failure" Feb 3 21:21:17.982: INFO: Pod "pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.464734ms Feb 3 21:21:20.062: INFO: Pod "pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100484117s Feb 3 21:21:22.066: INFO: Pod "pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104459412s STEP: Saw pod success Feb 3 21:21:22.066: INFO: Pod "pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5" satisfied condition "success or failure" Feb 3 21:21:22.069: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5 container env-test: STEP: delete the pod Feb 3 21:21:22.104: INFO: Waiting for pod pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5 to disappear Feb 3 21:21:22.109: INFO: Pod pod-configmaps-f74a5554-9b8d-4fdc-aa24-83e88662e0a5 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7676" for this suite. 
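An aside on the test above: unlike the volume-based secret tests nearby, this one surfaces a secret key as an environment variable via secretKeyRef. A minimal sketch follows; the secret name, key, variable name, image, and command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",                   // placeholder image
				Command: []string{"sh", "-c", "env"}, // placeholder command
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							// Hypothetical secret name and key.
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}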
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:22.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 3 21:21:22.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91" in namespace "projected-7721" to be "success or failure" Feb 3 21:21:22.206: INFO: Pod "downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91": Phase="Pending", Reason="", readiness=false. Elapsed: 11.605623ms Feb 3 21:21:24.251: INFO: Pod "downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057241855s Feb 3 21:21:26.255: INFO: Pod "downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060966272s STEP: Saw pod success Feb 3 21:21:26.255: INFO: Pod "downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91" satisfied condition "success or failure" Feb 3 21:21:26.257: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91 container client-container: STEP: delete the pod Feb 3 21:21:26.296: INFO: Waiting for pod downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91 to disappear Feb 3 21:21:26.335: INFO: Pod downwardapi-volume-b6a725fc-e0f0-4ac9-94cf-82bfd9086c91 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:26.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7721" for this suite. 
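An aside on the "set mode on item file" case above: it attaches an explicit per-item file mode to a downwardAPI source inside a projected volume. A minimal sketch follows; the file path, the choice of metadata.name as the projected field, the 0400 mode, image, and command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // illustrative per-item mode; shows up as -r-------- in a file listing

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-mode-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname", // hypothetical file name
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									Mode:     &mode, // the per-item mode under test
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}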
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2140,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:26.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-24d3081b-2fd4-416d-9380-042ca6ebda6c STEP: Creating a pod to test consume secrets Feb 3 21:21:26.424: INFO: Waiting up to 5m0s for pod "pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e" in namespace "secrets-8516" to be "success or failure" Feb 3 21:21:26.428: INFO: Pod "pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167307ms Feb 3 21:21:28.431: INFO: Pod "pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006880683s Feb 3 21:21:30.435: INFO: Pod "pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010849859s STEP: Saw pod success Feb 3 21:21:30.435: INFO: Pod "pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e" satisfied condition "success or failure" Feb 3 21:21:30.438: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e container secret-volume-test: STEP: delete the pod Feb 3 21:21:30.458: INFO: Waiting for pod pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e to disappear Feb 3 21:21:30.463: INFO: Pod pod-secrets-a7a00218-98f8-4831-98b6-b8742bbe5b9e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:30.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8516" for this suite. 
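An aside on the "with mappings" variant above: it differs from the plain secret-volume case by remapping a key to a caller-chosen file path via items, instead of exposing the key under its own name. A minimal sketch follows; the secret name, key, and path are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-mapping-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-example", // hypothetical secret
						// The mapping: key "data-1" appears in the volume as
						// "new-path-data-1" rather than under its own name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}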
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2142,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:30.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6e5555f2-b96e-4088-b87f-c3ab54cf5af0 STEP: Creating a pod to test consume configMaps Feb 3 21:21:30.631: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5" in namespace "projected-7900" to be "success or failure" Feb 3 21:21:30.639: INFO: Pod "pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332675ms Feb 3 21:21:32.643: INFO: Pod "pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012105154s Feb 3 21:21:34.647: INFO: Pod "pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016254986s STEP: Saw pod success Feb 3 21:21:34.647: INFO: Pod "pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5" satisfied condition "success or failure" Feb 3 21:21:34.651: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5 container projected-configmap-volume-test: STEP: delete the pod Feb 3 21:21:34.671: INFO: Waiting for pod pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5 to disappear Feb 3 21:21:34.675: INFO: Pod pod-projected-configmaps-b75617ab-5091-4cdf-9571-c17affc16fb5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:21:34.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7900" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2154,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:21:34.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-45cb11c8-4443-4aee-87a3-a31a502e37d6 in namespace container-probe-3818 Feb 3 21:21:38.837: INFO: Started pod test-webserver-45cb11c8-4443-4aee-87a3-a31a502e37d6 in namespace container-probe-3818 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 21:21:38.839: INFO: Initial restart count of pod test-webserver-45cb11c8-4443-4aee-87a3-a31a502e37d6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:25:39.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3818" for this suite. 
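An aside on the probe test above: it runs a webserver pod with an HTTP liveness probe for roughly four minutes and asserts that restartCount stays 0. A sketch of such a probe follows, written against the v1.17-era API used in this run, where Probe embeds Handler (newer releases renamed the embedded type to ProbeHandler); the image tag, path, port, and timing values are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // assumed image tag
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",                // assumed path; any 2xx/3xx response counts as healthy
							Port: intstr.FromInt(80), // assumed port
						},
					},
					InitialDelaySeconds: 15, // assumed timings
					PeriodSeconds:       3,
					FailureThreshold:    3,
				},
			}},
		},
	}

	b, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}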
• [SLOW TEST:244.804 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2174,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:25:39.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods changes Feb 3 21:25:39.581: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 3 21:25:44.583: INFO: Pod name pod-release: Found 1 pod out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:25:44.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7555" for this suite. 
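An aside on the ReplicationController test above: controller membership is purely label-based, so when a matched pod's label is changed to no longer satisfy the selector, the controller releases (orphans) that pod and creates a replacement to restore the replica count; that is why the log counts pod-release pods going from 0 back to 1. A sketch of such a controller follows; the image is an illustrative assumption, while the pod-release name and label mirror the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"}, // name mirrors the log above
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			// Patching a pod's "name" label releases it from this controller,
			// which then creates a replacement pod.
			Selector: map[string]string{"name": "pod-release"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-release",
						Image: "k8s.gcr.io/pause:3.1", // placeholder image
					}},
				},
			},
		},
	}

	b, err := json.MarshalIndent(rc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}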
• [SLOW TEST:5.224 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":131,"skipped":2187,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:25:44.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 3 21:25:44.829: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6846 I0203 21:25:44.852785 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6846, replica count: 1 I0203 21:25:45.903321 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:25:46.903579 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:25:47.903851 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:25:48.904103 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 21:25:49.904376 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 3 21:25:50.268: INFO: Created: latency-svc-v8fbs Feb 3 21:25:50.310: INFO: Got endpoints: latency-svc-v8fbs [306.046369ms] Feb 3 21:25:50.591: INFO: Created: latency-svc-7tr4r Feb 3 21:25:50.629: INFO: Created: latency-svc-vzp4f Feb 3 21:25:50.629: INFO: Got endpoints: latency-svc-7tr4r [318.325167ms] Feb 3 21:25:50.644: INFO: Got endpoints: latency-svc-vzp4f [333.628128ms] Feb 3 21:25:50.666: INFO: Created: latency-svc-h9pk6 Feb 3 21:25:50.690: INFO: Got endpoints: latency-svc-h9pk6 [379.53178ms] Feb 3 21:25:50.770: INFO: Created: latency-svc-zg2ph Feb 3 21:25:50.829: INFO: Got endpoints: latency-svc-zg2ph [518.832313ms] Feb 3 21:25:51.004: INFO: Created: latency-svc-znhcx Feb 3 21:25:51.039: INFO: Got endpoints: latency-svc-znhcx [728.323794ms] Feb 3 21:25:51.178: INFO: Created: latency-svc-7ghdb Feb 3 21:25:51.261: INFO: Got endpoints: latency-svc-7ghdb [950.16007ms] Feb 3 21:25:51.261: INFO: Created: latency-svc-zx4cf Feb 3 21:25:51.340: INFO: Got endpoints: latency-svc-zx4cf [1.029164133s] Feb 3 21:25:51.361: INFO: Created: latency-svc-wt7d5 Feb 3 21:25:51.374: INFO: Got endpoints: latency-svc-wt7d5 
[1.063353004s] Feb 3 21:25:51.420: INFO: Created: latency-svc-v9v52 Feb 3 21:25:51.495: INFO: Got endpoints: latency-svc-v9v52 [1.184723724s] Feb 3 21:25:51.497: INFO: Created: latency-svc-nrh62 Feb 3 21:25:51.505: INFO: Got endpoints: latency-svc-nrh62 [1.194773339s] Feb 3 21:25:51.549: INFO: Created: latency-svc-htt57 Feb 3 21:25:51.583: INFO: Got endpoints: latency-svc-htt57 [1.271950378s] Feb 3 21:25:51.639: INFO: Created: latency-svc-2fjz8 Feb 3 21:25:51.661: INFO: Got endpoints: latency-svc-2fjz8 [1.350280012s] Feb 3 21:25:51.662: INFO: Created: latency-svc-6fq8h Feb 3 21:25:51.673: INFO: Got endpoints: latency-svc-6fq8h [1.362994994s] Feb 3 21:25:51.700: INFO: Created: latency-svc-gh4j7 Feb 3 21:25:51.710: INFO: Got endpoints: latency-svc-gh4j7 [1.399190259s] Feb 3 21:25:51.729: INFO: Created: latency-svc-sprwx Feb 3 21:25:51.783: INFO: Got endpoints: latency-svc-sprwx [1.47248282s] Feb 3 21:25:51.792: INFO: Created: latency-svc-f8f7g Feb 3 21:25:51.830: INFO: Got endpoints: latency-svc-f8f7g [1.201111817s] Feb 3 21:25:51.867: INFO: Created: latency-svc-wq97c Feb 3 21:25:51.933: INFO: Got endpoints: latency-svc-wq97c [1.289043897s] Feb 3 21:25:51.935: INFO: Created: latency-svc-nxhhw Feb 3 21:25:51.948: INFO: Got endpoints: latency-svc-nxhhw [1.258292799s] Feb 3 21:25:51.985: INFO: Created: latency-svc-kxpbn Feb 3 21:25:52.002: INFO: Got endpoints: latency-svc-kxpbn [1.173025798s] Feb 3 21:25:52.021: INFO: Created: latency-svc-rtchs Feb 3 21:25:52.052: INFO: Got endpoints: latency-svc-rtchs [1.012925834s] Feb 3 21:25:52.064: INFO: Created: latency-svc-cv6rw Feb 3 21:25:52.084: INFO: Got endpoints: latency-svc-cv6rw [823.43315ms] Feb 3 21:25:52.098: INFO: Created: latency-svc-gzhf2 Feb 3 21:25:52.135: INFO: Got endpoints: latency-svc-gzhf2 [795.840325ms] Feb 3 21:25:52.190: INFO: Created: latency-svc-gsznm Feb 3 21:25:52.209: INFO: Got endpoints: latency-svc-gsznm [835.004296ms] Feb 3 21:25:52.209: INFO: Created: latency-svc-bb2lp Feb 3 21:25:52.225: INFO: Got endpoints: latency-svc-bb2lp [729.488085ms] Feb 3 21:25:52.245: INFO: Created: latency-svc-qz5vp Feb 3 21:25:52.255: INFO: Got endpoints: latency-svc-qz5vp [749.655404ms] Feb 3 21:25:52.273: INFO: Created: latency-svc-tkmf6 Feb 3 21:25:52.285: INFO: Got endpoints: latency-svc-tkmf6 [702.238879ms] Feb 3 21:25:52.334: INFO: Created: latency-svc-74bkh Feb 3 21:25:52.351: INFO: Got endpoints: latency-svc-74bkh [689.675331ms] Feb 3 21:25:52.351: INFO: Created: latency-svc-rv7vb Feb 3 21:25:52.377: INFO: Got endpoints: latency-svc-rv7vb [703.10489ms] Feb 3 21:25:52.407: INFO: Created: latency-svc-4skkd Feb 3 21:25:52.417: INFO: Got endpoints: latency-svc-4skkd [707.037754ms] Feb 3 21:25:52.477: INFO: Created: latency-svc-m9pmx Feb 3 21:25:52.501: INFO: Got endpoints: latency-svc-m9pmx [717.550474ms] Feb 3 21:25:52.501: INFO: Created: latency-svc-pklr2 Feb 3 21:25:52.518: INFO: Got endpoints: latency-svc-pklr2 [687.833638ms] Feb 3 21:25:52.530: INFO: Created: latency-svc-k7qg5 Feb 3 21:25:52.549: INFO: Got endpoints: latency-svc-k7qg5 [615.535956ms] Feb 3 21:25:52.634: INFO: Created: latency-svc-wvnc8 Feb 3 21:25:52.665: INFO: Created: latency-svc-hsjn9 Feb 3 21:25:52.665: INFO: Got endpoints: latency-svc-wvnc8 [717.051101ms] Feb 3 21:25:52.680: INFO: Got endpoints: latency-svc-hsjn9 [677.657474ms] Feb 3 21:25:52.701: INFO: Created: latency-svc-kjhsd Feb 3 21:25:52.716: INFO: Got endpoints: latency-svc-kjhsd [664.11094ms] Feb 3 21:25:52.771: INFO: Created: latency-svc-mch9n Feb 3 21:25:52.801: INFO: Got endpoints: latency-svc-mch9n 
[716.861984ms] Feb 3 21:25:52.801: INFO: Created: latency-svc-f9ck2 Feb 3 21:25:52.837: INFO: Got endpoints: latency-svc-f9ck2 [701.679761ms] Feb 3 21:25:52.909: INFO: Created: latency-svc-ssp2r Feb 3 21:25:52.928: INFO: Got endpoints: latency-svc-ssp2r [719.512498ms] Feb 3 21:25:52.929: INFO: Created: latency-svc-bzv5g Feb 3 21:25:52.938: INFO: Got endpoints: latency-svc-bzv5g [713.32186ms] Feb 3 21:25:52.952: INFO: Created: latency-svc-8tz97 Feb 3 21:25:52.962: INFO: Got endpoints: latency-svc-8tz97 [706.694357ms] Feb 3 21:25:52.987: INFO: Created: latency-svc-gkkk5 Feb 3 21:25:53.004: INFO: Got endpoints: latency-svc-gkkk5 [719.128162ms] Feb 3 21:25:53.046: INFO: Created: latency-svc-c4w5c Feb 3 21:25:53.089: INFO: Got endpoints: latency-svc-c4w5c [738.596309ms] Feb 3 21:25:53.121: INFO: Created: latency-svc-s2g6h Feb 3 21:25:53.136: INFO: Got endpoints: latency-svc-s2g6h [759.310319ms] Feb 3 21:25:53.178: INFO: Created: latency-svc-2zf48 Feb 3 21:25:53.235: INFO: Created: latency-svc-797f5 Feb 3 21:25:53.235: INFO: Got endpoints: latency-svc-2zf48 [818.291726ms] Feb 3 21:25:53.267: INFO: Got endpoints: latency-svc-797f5 [766.312045ms] Feb 3 21:25:53.328: INFO: Created: latency-svc-j2rn9 Feb 3 21:25:53.367: INFO: Created: latency-svc-c4jwc Feb 3 21:25:53.367: INFO: Got endpoints: latency-svc-j2rn9 [849.034849ms] Feb 3 21:25:53.386: INFO: Got endpoints: latency-svc-c4jwc [837.699884ms] Feb 3 21:25:53.406: INFO: Created: latency-svc-qbwz4 Feb 3 21:25:53.422: INFO: Got endpoints: latency-svc-qbwz4 [757.213295ms] Feb 3 21:25:53.486: INFO: Created: latency-svc-f9d92 Feb 3 21:25:53.494: INFO: Got endpoints: latency-svc-f9d92 [814.21576ms] Feb 3 21:25:53.514: INFO: Created: latency-svc-2qq5w Feb 3 21:25:53.530: INFO: Got endpoints: latency-svc-2qq5w [814.377094ms] Feb 3 21:25:53.565: INFO: Created: latency-svc-k9cdn Feb 3 21:25:53.633: INFO: Got endpoints: latency-svc-k9cdn [831.829966ms] Feb 3 21:25:53.664: INFO: Created: latency-svc-75tlk Feb 3 21:25:53.681: INFO: Got endpoints: latency-svc-75tlk [844.14776ms] Feb 3 21:25:53.700: INFO: Created: latency-svc-z6kph Feb 3 21:25:53.717: INFO: Got endpoints: latency-svc-z6kph [788.236487ms] Feb 3 21:25:53.771: INFO: Created: latency-svc-d7m2z Feb 3 21:25:53.804: INFO: Got endpoints: latency-svc-d7m2z [866.030294ms] Feb 3 21:25:53.805: INFO: Created: latency-svc-49l8b Feb 3 21:25:53.819: INFO: Got endpoints: latency-svc-49l8b [856.660995ms] Feb 3 21:25:53.835: INFO: Created: latency-svc-srh84 Feb 3 21:25:53.865: INFO: Got endpoints: latency-svc-srh84 [860.378743ms] Feb 3 21:25:53.916: INFO: Created: latency-svc-bnhj8 Feb 3 21:25:53.933: INFO: Got endpoints: latency-svc-bnhj8 [843.439431ms] Feb 3 21:25:53.958: INFO: Created: latency-svc-cb59m Feb 3 21:25:53.974: INFO: Got endpoints: latency-svc-cb59m [838.455351ms] Feb 3 21:25:53.996: INFO: Created: latency-svc-xzn7r Feb 3 21:25:54.028: INFO: Got endpoints: latency-svc-xzn7r [793.027316ms] Feb 3 21:25:54.050: INFO: Created: latency-svc-xww8r Feb 3 21:25:54.063: INFO: Got endpoints: latency-svc-xww8r [796.140293ms] Feb 3 21:25:54.080: INFO: Created: latency-svc-8qjff Feb 3 21:25:54.094: INFO: Got endpoints: latency-svc-8qjff [726.872255ms] Feb 3 21:25:54.126: INFO: Created: latency-svc-bbr9z Feb 3 21:25:54.160: INFO: Got endpoints: latency-svc-bbr9z [773.534737ms] Feb 3 21:25:54.168: INFO: Created: latency-svc-dks5v Feb 3 21:25:54.183: INFO: Got endpoints: latency-svc-dks5v [760.564797ms] Feb 3 21:25:54.216: INFO: Created: latency-svc-gl9jq Feb 3 21:25:54.231: INFO: Got endpoints: latency-svc-gl9jq 
[736.884373ms] Feb 3 21:25:54.248: INFO: Created: latency-svc-44pcp Feb 3 21:25:54.279: INFO: Got endpoints: latency-svc-44pcp [748.772294ms] Feb 3 21:25:54.296: INFO: Created: latency-svc-dcrtc Feb 3 21:25:54.309: INFO: Got endpoints: latency-svc-dcrtc [676.285186ms] Feb 3 21:25:54.328: INFO: Created: latency-svc-6mmrz Feb 3 21:25:54.340: INFO: Got endpoints: latency-svc-6mmrz [658.155488ms] Feb 3 21:25:54.360: INFO: Created: latency-svc-xmgxz Feb 3 21:25:54.376: INFO: Got endpoints: latency-svc-xmgxz [658.980069ms] Feb 3 21:25:54.418: INFO: Created: latency-svc-5rzs4 Feb 3 21:25:54.438: INFO: Created: latency-svc-76xct Feb 3 21:25:54.438: INFO: Got endpoints: latency-svc-5rzs4 [633.907354ms] Feb 3 21:25:54.466: INFO: Got endpoints: latency-svc-76xct [647.011143ms] Feb 3 21:25:54.494: INFO: Created: latency-svc-x5s94 Feb 3 21:25:54.537: INFO: Got endpoints: latency-svc-x5s94 [672.2755ms] Feb 3 21:25:54.548: INFO: Created: latency-svc-lpsww Feb 3 21:25:54.561: INFO: Got endpoints: latency-svc-lpsww [628.416808ms] Feb 3 21:25:54.622: INFO: Created: latency-svc-mqvz5 Feb 3 21:25:54.693: INFO: Got endpoints: latency-svc-mqvz5 [718.744748ms] Feb 3 21:25:54.722: INFO: Created: latency-svc-xsg8g Feb 3 21:25:54.734: INFO: Got endpoints: latency-svc-xsg8g [706.155054ms] Feb 3 21:25:54.752: INFO: Created: latency-svc-js4qz Feb 3 21:25:54.766: INFO: Got endpoints: latency-svc-js4qz [702.970578ms] Feb 3 21:25:54.782: INFO: Created: latency-svc-gkzb2 Feb 3 21:25:54.812: INFO: Got endpoints: latency-svc-gkzb2 [718.417697ms] Feb 3 21:25:54.852: INFO: Created: latency-svc-dsfq8 Feb 3 21:25:54.884: INFO: Got endpoints: latency-svc-dsfq8 [724.445459ms] Feb 3 21:25:54.906: INFO: Created: latency-svc-k6gw9 Feb 3 21:25:54.950: INFO: Got endpoints: latency-svc-k6gw9 [767.185409ms] Feb 3 21:25:54.969: INFO: Created: latency-svc-mhjfd Feb 3 21:25:54.998: INFO: Got endpoints: latency-svc-mhjfd [767.031986ms] Feb 3 21:25:55.101: INFO: Created: latency-svc-9jzkc Feb 3 21:25:55.128: INFO: Got endpoints: latency-svc-9jzkc [848.473922ms] Feb 3 21:25:55.128: INFO: Created: latency-svc-nlszl Feb 3 21:25:55.142: INFO: Got endpoints: latency-svc-nlszl [832.770294ms] Feb 3 21:25:55.164: INFO: Created: latency-svc-d89h9 Feb 3 21:25:55.179: INFO: Got endpoints: latency-svc-d89h9 [839.037353ms] Feb 3 21:25:55.262: INFO: Created: latency-svc-qmhcb Feb 3 21:25:55.292: INFO: Created: latency-svc-xj4nm Feb 3 21:25:55.292: INFO: Got endpoints: latency-svc-qmhcb [916.317522ms] Feb 3 21:25:55.344: INFO: Got endpoints: latency-svc-xj4nm [906.046078ms] Feb 3 21:25:55.418: INFO: Created: latency-svc-2dpmn Feb 3 21:25:55.442: INFO: Got endpoints: latency-svc-2dpmn [976.144895ms] Feb 3 21:25:55.442: INFO: Created: latency-svc-7zqwc Feb 3 21:25:55.454: INFO: Got endpoints: latency-svc-7zqwc [916.856671ms] Feb 3 21:25:55.478: INFO: Created: latency-svc-vbwkc Feb 3 21:25:55.500: INFO: Got endpoints: latency-svc-vbwkc [938.636228ms] Feb 3 21:25:55.549: INFO: Created: latency-svc-h2vvs Feb 3 21:25:55.567: INFO: Got endpoints: latency-svc-h2vvs [873.629965ms] Feb 3 21:25:55.598: INFO: Created: latency-svc-l79xx Feb 3 21:25:55.615: INFO: Got endpoints: latency-svc-l79xx [880.468388ms] Feb 3 21:25:55.699: INFO: Created: latency-svc-tzt8n Feb 3 21:25:55.712: INFO: Got endpoints: latency-svc-tzt8n [945.617175ms] Feb 3 21:25:55.742: INFO: Created: latency-svc-xdldq Feb 3 21:25:55.752: INFO: Got endpoints: latency-svc-xdldq [940.054284ms] Feb 3 21:25:55.770: INFO: Created: latency-svc-vq9zh Feb 3 21:25:55.784: INFO: Got endpoints: latency-svc-vq9zh 
[899.036595ms] Feb 3 21:25:55.836: INFO: Created: latency-svc-qrlmj Feb 3 21:25:55.844: INFO: Got endpoints: latency-svc-qrlmj [893.309991ms] Feb 3 21:25:55.874: INFO: Created: latency-svc-4cznp Feb 3 21:25:55.890: INFO: Got endpoints: latency-svc-4cznp [891.837299ms] Feb 3 21:25:55.922: INFO: Created: latency-svc-8t5h9 Feb 3 21:25:55.933: INFO: Got endpoints: latency-svc-8t5h9 [805.784319ms] Feb 3 21:25:55.975: INFO: Created: latency-svc-z86vg Feb 3 21:25:55.982: INFO: Got endpoints: latency-svc-z86vg [839.371528ms] Feb 3 21:25:56.022: INFO: Created: latency-svc-9zw4h Feb 3 21:25:56.035: INFO: Got endpoints: latency-svc-9zw4h [856.737299ms] Feb 3 21:25:56.052: INFO: Created: latency-svc-h6n4k Feb 3 21:25:56.066: INFO: Got endpoints: latency-svc-h6n4k [773.321334ms] Feb 3 21:25:56.100: INFO: Created: latency-svc-cg4sc Feb 3 21:25:56.126: INFO: Created: latency-svc-jn6m4 Feb 3 21:25:56.126: INFO: Got endpoints: latency-svc-cg4sc [781.449454ms] Feb 3 21:25:56.168: INFO: Got endpoints: latency-svc-jn6m4 [725.79018ms] Feb 3 21:25:56.192: INFO: Created: latency-svc-6cxm7 Feb 3 21:25:56.231: INFO: Got endpoints: latency-svc-6cxm7 [777.506908ms] Feb 3 21:25:56.243: INFO: Created: latency-svc-mfp9r Feb 3 21:25:56.256: INFO: Got endpoints: latency-svc-mfp9r [755.829264ms] Feb 3 21:25:56.273: INFO: Created: latency-svc-l28nl Feb 3 21:25:56.286: INFO: Got endpoints: latency-svc-l28nl [718.810868ms] Feb 3 21:25:56.305: INFO: Created: latency-svc-szbs8 Feb 3 21:25:56.316: INFO: Got endpoints: latency-svc-szbs8 [700.734431ms] Feb 3 21:25:56.363: INFO: Created: latency-svc-sqx9k Feb 3 21:25:56.376: INFO: Got endpoints: latency-svc-sqx9k [664.04798ms] Feb 3 21:25:56.395: INFO: Created: latency-svc-hz8vp Feb 3 21:25:56.412: INFO: Got endpoints: latency-svc-hz8vp [659.329854ms] Feb 3 21:25:56.431: INFO: Created: latency-svc-dtn29 Feb 3 21:25:56.462: INFO: Got endpoints: latency-svc-dtn29 [678.073487ms] Feb 3 21:25:56.514: INFO: Created: latency-svc-xvg6d Feb 3 21:25:56.520: INFO: Got endpoints: latency-svc-xvg6d [675.830619ms] Feb 3 21:25:56.556: INFO: Created: latency-svc-f2xmf Feb 3 21:25:56.569: INFO: Got endpoints: latency-svc-f2xmf [678.416649ms] Feb 3 21:25:56.610: INFO: Created: latency-svc-9fx4n Feb 3 21:25:56.645: INFO: Got endpoints: latency-svc-9fx4n [711.202115ms] Feb 3 21:25:56.658: INFO: Created: latency-svc-n7szs Feb 3 21:25:56.707: INFO: Got endpoints: latency-svc-n7szs [725.30542ms] Feb 3 21:25:56.772: INFO: Created: latency-svc-mlsn5 Feb 3 21:25:56.802: INFO: Got endpoints: latency-svc-mlsn5 [766.977174ms] Feb 3 21:25:56.834: INFO: Created: latency-svc-qgknk Feb 3 21:25:56.850: INFO: Got endpoints: latency-svc-qgknk [784.272569ms] Feb 3 21:25:56.945: INFO: Created: latency-svc-dgklp Feb 3 21:25:56.964: INFO: Got endpoints: latency-svc-dgklp [837.505565ms] Feb 3 21:25:56.964: INFO: Created: latency-svc-j4gr7 Feb 3 21:25:57.000: INFO: Got endpoints: latency-svc-j4gr7 [832.026891ms] Feb 3 21:25:57.039: INFO: Created: latency-svc-c4hpv Feb 3 21:25:57.088: INFO: Got endpoints: latency-svc-c4hpv [856.23085ms] Feb 3 21:25:57.109: INFO: Created: latency-svc-qlfqh Feb 3 21:25:57.146: INFO: Got endpoints: latency-svc-qlfqh [890.441662ms] Feb 3 21:25:57.176: INFO: Created: latency-svc-mjszw Feb 3 21:25:57.214: INFO: Got endpoints: latency-svc-mjszw [928.520758ms] Feb 3 21:25:57.248: INFO: Created: latency-svc-pdw9w Feb 3 21:25:57.268: INFO: Got endpoints: latency-svc-pdw9w [952.57356ms] Feb 3 21:25:57.306: INFO: Created: latency-svc-4hcc9 Feb 3 21:25:57.345: INFO: Got endpoints: latency-svc-4hcc9 
[969.318419ms] Feb 3 21:25:57.348: INFO: Created: latency-svc-w6k78 Feb 3 21:25:57.364: INFO: Got endpoints: latency-svc-w6k78 [952.319311ms] Feb 3 21:25:57.414: INFO: Created: latency-svc-dvz7j Feb 3 21:25:57.519: INFO: Got endpoints: latency-svc-dvz7j [1.057725276s] Feb 3 21:25:57.522: INFO: Created: latency-svc-62mpg Feb 3 21:25:57.533: INFO: Got endpoints: latency-svc-62mpg [1.013029386s] Feb 3 21:25:57.560: INFO: Created: latency-svc-gggnw Feb 3 21:25:57.569: INFO: Got endpoints: latency-svc-gggnw [1.000155435s] Feb 3 21:25:57.600: INFO: Created: latency-svc-x92k6 Feb 3 21:25:57.639: INFO: Got endpoints: latency-svc-x92k6 [994.121323ms] Feb 3 21:25:57.654: INFO: Created: latency-svc-rsw4z Feb 3 21:25:57.671: INFO: Got endpoints: latency-svc-rsw4z [964.366665ms] Feb 3 21:25:57.698: INFO: Created: latency-svc-g5tsp Feb 3 21:25:57.713: INFO: Got endpoints: latency-svc-g5tsp [910.439353ms] Feb 3 21:25:57.727: INFO: Created: latency-svc-jrv68 Feb 3 21:25:57.813: INFO: Got endpoints: latency-svc-jrv68 [963.407646ms] Feb 3 21:25:57.817: INFO: Created: latency-svc-sg6vm Feb 3 21:25:57.820: INFO: Got endpoints: latency-svc-sg6vm [856.621734ms] Feb 3 21:25:57.865: INFO: Created: latency-svc-mmq8g Feb 3 21:25:57.879: INFO: Got endpoints: latency-svc-mmq8g [879.584966ms] Feb 3 21:25:57.894: INFO: Created: latency-svc-g4c6r Feb 3 21:25:57.903: INFO: Got endpoints: latency-svc-g4c6r [815.402844ms] Feb 3 21:25:57.963: INFO: Created: latency-svc-mtgbp Feb 3 21:25:58.010: INFO: Got endpoints: latency-svc-mtgbp [863.907746ms] Feb 3 21:25:58.011: INFO: Created: latency-svc-2z5v8 Feb 3 21:25:58.035: INFO: Got endpoints: latency-svc-2z5v8 [820.868895ms] Feb 3 21:25:58.112: INFO: Created: latency-svc-hgmcl Feb 3 21:25:58.140: INFO: Created: latency-svc-r5vwd Feb 3 21:25:58.140: INFO: Got endpoints: latency-svc-hgmcl [871.37211ms] Feb 3 21:25:58.155: INFO: Got endpoints: latency-svc-r5vwd [809.737539ms] Feb 3 21:25:58.182: INFO: Created: latency-svc-kqvsv Feb 3 21:25:58.191: INFO: Got endpoints: latency-svc-kqvsv [826.890615ms] Feb 3 21:25:58.244: INFO: Created: latency-svc-8bxzg Feb 3 21:25:58.268: INFO: Got endpoints: latency-svc-8bxzg [748.131794ms] Feb 3 21:25:58.269: INFO: Created: latency-svc-thzn7 Feb 3 21:25:58.288: INFO: Got endpoints: latency-svc-thzn7 [754.821788ms] Feb 3 21:25:58.324: INFO: Created: latency-svc-zcgll Feb 3 21:25:58.411: INFO: Got endpoints: latency-svc-zcgll [842.352559ms] Feb 3 21:25:58.424: INFO: Created: latency-svc-5jgpq Feb 3 21:25:58.438: INFO: Got endpoints: latency-svc-5jgpq [798.725987ms] Feb 3 21:25:58.472: INFO: Created: latency-svc-qklbf Feb 3 21:25:58.486: INFO: Got endpoints: latency-svc-qklbf [814.178691ms] Feb 3 21:25:58.506: INFO: Created: latency-svc-9mhln Feb 3 21:25:58.537: INFO: Got endpoints: latency-svc-9mhln [823.963114ms] Feb 3 21:25:58.547: INFO: Created: latency-svc-ngg49 Feb 3 21:25:58.578: INFO: Got endpoints: latency-svc-ngg49 [764.871701ms] Feb 3 21:25:58.596: INFO: Created: latency-svc-9jkbh Feb 3 21:25:58.604: INFO: Got endpoints: latency-svc-9jkbh [783.657168ms] Feb 3 21:25:58.622: INFO: Created: latency-svc-68kj8 Feb 3 21:25:58.634: INFO: Got endpoints: latency-svc-68kj8 [754.880144ms] Feb 3 21:25:58.681: INFO: Created: latency-svc-tx975 Feb 3 21:25:58.712: INFO: Got endpoints: latency-svc-tx975 [808.6647ms] Feb 3 21:25:58.740: INFO: Created: latency-svc-q4bdp Feb 3 21:25:58.754: INFO: Got endpoints: latency-svc-q4bdp [743.910644ms] Feb 3 21:25:58.770: INFO: Created: latency-svc-f269q Feb 3 21:25:58.806: INFO: Got endpoints: latency-svc-f269q 
[770.628588ms] Feb 3 21:25:58.811: INFO: Created: latency-svc-bqn8q Feb 3 21:25:58.826: INFO: Got endpoints: latency-svc-bqn8q [686.1619ms] Feb 3 21:25:58.850: INFO: Created: latency-svc-clndv Feb 3 21:25:58.863: INFO: Got endpoints: latency-svc-clndv [707.610912ms] Feb 3 21:25:58.879: INFO: Created: latency-svc-kg7gr Feb 3 21:25:58.893: INFO: Got endpoints: latency-svc-kg7gr [702.014462ms] Feb 3 21:25:58.944: INFO: Created: latency-svc-4kdwb Feb 3 21:25:58.974: INFO: Created: latency-svc-s9htj Feb 3 21:25:58.974: INFO: Got endpoints: latency-svc-4kdwb [706.566699ms] Feb 3 21:25:59.004: INFO: Got endpoints: latency-svc-s9htj [716.020723ms] Feb 3 21:25:59.034: INFO: Created: latency-svc-rdlpd Feb 3 21:25:59.100: INFO: Got endpoints: latency-svc-rdlpd [688.723602ms] Feb 3 21:25:59.102: INFO: Created: latency-svc-jh88b Feb 3 21:25:59.109: INFO: Got endpoints: latency-svc-jh88b [671.102121ms] Feb 3 21:25:59.125: INFO: Created: latency-svc-j9xqz Feb 3 21:25:59.139: INFO: Got endpoints: latency-svc-j9xqz [653.201453ms] Feb 3 21:25:59.156: INFO: Created: latency-svc-n6qjl Feb 3 21:25:59.173: INFO: Got endpoints: latency-svc-n6qjl [636.16533ms] Feb 3 21:25:59.196: INFO: Created: latency-svc-d2sbr Feb 3 21:25:59.250: INFO: Got endpoints: latency-svc-d2sbr [671.352152ms] Feb 3 21:25:59.251: INFO: Created: latency-svc-d4d4b Feb 3 21:25:59.270: INFO: Got endpoints: latency-svc-d4d4b [665.619916ms] Feb 3 21:25:59.336: INFO: Created: latency-svc-28zfr Feb 3 21:25:59.406: INFO: Got endpoints: latency-svc-28zfr [771.450174ms] Feb 3 21:25:59.407: INFO: Created: latency-svc-n2sck Feb 3 21:25:59.419: INFO: Got endpoints: latency-svc-n2sck [707.125766ms] Feb 3 21:25:59.449: INFO: Created: latency-svc-r5v8q Feb 3 21:25:59.492: INFO: Got endpoints: latency-svc-r5v8q [737.145608ms] Feb 3 21:25:59.557: INFO: Created: latency-svc-kzz4p Feb 3 21:25:59.563: INFO: Got endpoints: latency-svc-kzz4p [756.890503ms] Feb 3 21:25:59.604: INFO: Created: latency-svc-pkth8 Feb 3 21:25:59.639: INFO: Got endpoints: latency-svc-pkth8 [812.715387ms] Feb 3 21:25:59.711: INFO: Created: latency-svc-shllt Feb 3 21:25:59.743: INFO: Got endpoints: latency-svc-shllt [880.44784ms] Feb 3 21:25:59.744: INFO: Created: latency-svc-6q46h Feb 3 21:25:59.768: INFO: Got endpoints: latency-svc-6q46h [874.615001ms] Feb 3 21:25:59.792: INFO: Created: latency-svc-65dvk Feb 3 21:25:59.891: INFO: Got endpoints: latency-svc-65dvk [916.393225ms] Feb 3 21:25:59.893: INFO: Created: latency-svc-6pl8p Feb 3 21:25:59.899: INFO: Got endpoints: latency-svc-6pl8p [895.404109ms] Feb 3 21:25:59.923: INFO: Created: latency-svc-s6kht Feb 3 21:25:59.948: INFO: Got endpoints: latency-svc-s6kht [847.426737ms] Feb 3 21:25:59.972: INFO: Created: latency-svc-df8bd Feb 3 21:26:00.028: INFO: Got endpoints: latency-svc-df8bd [919.554359ms] Feb 3 21:26:00.061: INFO: Created: latency-svc-bq2rz Feb 3 21:26:00.079: INFO: Got endpoints: latency-svc-bq2rz [939.652726ms] Feb 3 21:26:00.107: INFO: Created: latency-svc-w7x2d Feb 3 21:26:00.120: INFO: Got endpoints: latency-svc-w7x2d [947.194004ms] Feb 3 21:26:00.172: INFO: Created: latency-svc-8mp77 Feb 3 21:26:00.180: INFO: Got endpoints: latency-svc-8mp77 [930.226386ms] Feb 3 21:26:00.218: INFO: Created: latency-svc-8fbbx Feb 3 21:26:00.234: INFO: Got endpoints: latency-svc-8fbbx [964.304612ms] Feb 3 21:26:00.254: INFO: Created: latency-svc-xsf22 Feb 3 21:26:00.270: INFO: Got endpoints: latency-svc-xsf22 [864.15076ms] Feb 3 21:26:00.310: INFO: Created: latency-svc-n8xmp Feb 3 21:26:00.318: INFO: Got endpoints: latency-svc-n8xmp 
[898.776677ms] Feb 3 21:26:00.335: INFO: Created: latency-svc-5mbfg Feb 3 21:26:00.348: INFO: Got endpoints: latency-svc-5mbfg [855.944228ms] Feb 3 21:26:00.365: INFO: Created: latency-svc-fgdhj Feb 3 21:26:00.378: INFO: Got endpoints: latency-svc-fgdhj [815.535364ms] Feb 3 21:26:00.397: INFO: Created: latency-svc-gql4w Feb 3 21:26:00.447: INFO: Got endpoints: latency-svc-gql4w [808.332714ms] Feb 3 21:26:00.449: INFO: Created: latency-svc-hkqns Feb 3 21:26:00.469: INFO: Created: latency-svc-865gv Feb 3 21:26:00.469: INFO: Got endpoints: latency-svc-hkqns [725.924283ms] Feb 3 21:26:00.481: INFO: Got endpoints: latency-svc-865gv [712.913008ms] Feb 3 21:26:00.497: INFO: Created: latency-svc-d68lr Feb 3 21:26:00.533: INFO: Got endpoints: latency-svc-d68lr [642.443151ms] Feb 3 21:26:00.580: INFO: Created: latency-svc-rf8fc Feb 3 21:26:00.595: INFO: Created: latency-svc-x5gxz Feb 3 21:26:00.595: INFO: Got endpoints: latency-svc-rf8fc [696.284591ms] Feb 3 21:26:00.612: INFO: Got endpoints: latency-svc-x5gxz [664.62733ms] Feb 3 21:26:00.631: INFO: Created: latency-svc-5gt6v Feb 3 21:26:00.661: INFO: Got endpoints: latency-svc-5gt6v [632.793853ms] Feb 3 21:26:00.711: INFO: Created: latency-svc-8zkng Feb 3 21:26:00.719: INFO: Got endpoints: latency-svc-8zkng [640.831339ms] Feb 3 21:26:00.755: INFO: Created: latency-svc-5tvls Feb 3 21:26:00.767: INFO: Got endpoints: latency-svc-5tvls [646.469184ms] Feb 3 21:26:00.791: INFO: Created: latency-svc-q62m5 Feb 3 21:26:00.803: INFO: Got endpoints: latency-svc-q62m5 [622.839777ms] Feb 3 21:26:00.843: INFO: Created: latency-svc-5v8h9 Feb 3 21:26:00.871: INFO: Got endpoints: latency-svc-5v8h9 [637.256787ms] Feb 3 21:26:00.872: INFO: Created: latency-svc-htztv Feb 3 21:26:00.887: INFO: Got endpoints: latency-svc-htztv [616.819225ms] Feb 3 21:26:00.913: INFO: Created: latency-svc-2bkqd Feb 3 21:26:00.930: INFO: Got endpoints: latency-svc-2bkqd [611.578372ms] Feb 3 21:26:00.975: INFO: Created: latency-svc-vf49w Feb 3 21:26:00.995: INFO: Got endpoints: latency-svc-vf49w [647.358726ms] Feb 3 21:26:00.996: INFO: Created: latency-svc-f24dc Feb 3 21:26:01.008: INFO: Got endpoints: latency-svc-f24dc [629.515374ms] Feb 3 21:26:01.025: INFO: Created: latency-svc-4wdrc Feb 3 21:26:01.038: INFO: Got endpoints: latency-svc-4wdrc [590.655386ms] Feb 3 21:26:01.062: INFO: Created: latency-svc-5drmx Feb 3 21:26:01.106: INFO: Got endpoints: latency-svc-5drmx [636.596091ms] Feb 3 21:26:01.129: INFO: Created: latency-svc-fnhls Feb 3 21:26:01.140: INFO: Got endpoints: latency-svc-fnhls [658.904398ms] Feb 3 21:26:01.171: INFO: Created: latency-svc-ctcp7 Feb 3 21:26:01.182: INFO: Got endpoints: latency-svc-ctcp7 [648.70872ms] Feb 3 21:26:01.199: INFO: Created: latency-svc-nf7wm Feb 3 21:26:01.238: INFO: Got endpoints: latency-svc-nf7wm [642.253774ms] Feb 3 21:26:01.241: INFO: Created: latency-svc-psd7c Feb 3 21:26:01.258: INFO: Got endpoints: latency-svc-psd7c [645.996711ms] Feb 3 21:26:01.290: INFO: Created: latency-svc-t9qw6 Feb 3 21:26:01.307: INFO: Got endpoints: latency-svc-t9qw6 [645.278399ms] Feb 3 21:26:01.307: INFO: Latencies: [318.325167ms 333.628128ms 379.53178ms 518.832313ms 590.655386ms 611.578372ms 615.535956ms 616.819225ms 622.839777ms 628.416808ms 629.515374ms 632.793853ms 633.907354ms 636.16533ms 636.596091ms 637.256787ms 640.831339ms 642.253774ms 642.443151ms 645.278399ms 645.996711ms 646.469184ms 647.011143ms 647.358726ms 648.70872ms 653.201453ms 658.155488ms 658.904398ms 658.980069ms 659.329854ms 664.04798ms 664.11094ms 664.62733ms 665.619916ms 671.102121ms 
671.352152ms 672.2755ms 675.830619ms 676.285186ms 677.657474ms 678.073487ms 678.416649ms 686.1619ms 687.833638ms 688.723602ms 689.675331ms 696.284591ms 700.734431ms 701.679761ms 702.014462ms 702.238879ms 702.970578ms 703.10489ms 706.155054ms 706.566699ms 706.694357ms 707.037754ms 707.125766ms 707.610912ms 711.202115ms 712.913008ms 713.32186ms 716.020723ms 716.861984ms 717.051101ms 717.550474ms 718.417697ms 718.744748ms 718.810868ms 719.128162ms 719.512498ms 724.445459ms 725.30542ms 725.79018ms 725.924283ms 726.872255ms 728.323794ms 729.488085ms 736.884373ms 737.145608ms 738.596309ms 743.910644ms 748.131794ms 748.772294ms 749.655404ms 754.821788ms 754.880144ms 755.829264ms 756.890503ms 757.213295ms 759.310319ms 760.564797ms 764.871701ms 766.312045ms 766.977174ms 767.031986ms 767.185409ms 770.628588ms 771.450174ms 773.321334ms 773.534737ms 777.506908ms 781.449454ms 783.657168ms 784.272569ms 788.236487ms 793.027316ms 795.840325ms 796.140293ms 798.725987ms 805.784319ms 808.332714ms 808.6647ms 809.737539ms 812.715387ms 814.178691ms 814.21576ms 814.377094ms 815.402844ms 815.535364ms 818.291726ms 820.868895ms 823.43315ms 823.963114ms 826.890615ms 831.829966ms 832.026891ms 832.770294ms 835.004296ms 837.505565ms 837.699884ms 838.455351ms 839.037353ms 839.371528ms 842.352559ms 843.439431ms 844.14776ms 847.426737ms 848.473922ms 849.034849ms 855.944228ms 856.23085ms 856.621734ms 856.660995ms 856.737299ms 860.378743ms 863.907746ms 864.15076ms 866.030294ms 871.37211ms 873.629965ms 874.615001ms 879.584966ms 880.44784ms 880.468388ms 890.441662ms 891.837299ms 893.309991ms 895.404109ms 898.776677ms 899.036595ms 906.046078ms 910.439353ms 916.317522ms 916.393225ms 916.856671ms 919.554359ms 928.520758ms 930.226386ms 938.636228ms 939.652726ms 940.054284ms 945.617175ms 947.194004ms 950.16007ms 952.319311ms 952.57356ms 963.407646ms 964.304612ms 964.366665ms 969.318419ms 976.144895ms 994.121323ms 1.000155435s 1.012925834s 1.013029386s 1.029164133s 1.057725276s 1.063353004s 1.173025798s 1.184723724s 1.194773339s 1.201111817s 1.258292799s 1.271950378s 1.289043897s 1.350280012s 1.362994994s 1.399190259s 1.47248282s] Feb 3 21:26:01.307: INFO: 50 %ile: 773.534737ms Feb 3 21:26:01.307: INFO: 90 %ile: 969.318419ms Feb 3 21:26:01.307: INFO: 99 %ile: 1.399190259s Feb 3 21:26:01.307: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:01.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6846" for this suite. • [SLOW TEST:16.613 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":132,"skipped":2187,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:01.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:12.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7164" for this suite. • [SLOW TEST:11.380 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":133,"skipped":2202,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:12.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:29.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6979" for this suite. • [SLOW TEST:16.526 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":134,"skipped":2203,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:29.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-68ca047f-6c1e-48b3-a2de-3c2de6bd13a3 STEP: Creating a pod to test consume secrets Feb 3 21:26:29.358: INFO: Waiting up to 5m0s for pod "pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e" in namespace "secrets-3258" to be "success or failure" Feb 3 21:26:29.361: INFO: Pod "pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174007ms Feb 3 21:26:31.366: INFO: Pod "pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909131s Feb 3 21:26:33.370: INFO: Pod "pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012178805s STEP: Saw pod success Feb 3 21:26:33.370: INFO: Pod "pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e" satisfied condition "success or failure" Feb 3 21:26:33.373: INFO: Trying to get logs from node jerma-worker pod pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e container secret-volume-test: STEP: delete the pod Feb 3 21:26:33.409: INFO: Waiting for pod pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e to disappear Feb 3 21:26:33.441: INFO: Pod pod-secrets-77dcfb6b-1f92-4bd3-924f-3166d727f59e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:33.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3258" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2209,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:33.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 21:26:37.532: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:37.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8106" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2224,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:37.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:26:53.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8559" for this suite. • [SLOW TEST:16.301 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":137,"skipped":2236,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:26:53.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Feb 3 21:26:54.043: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:10.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7475" for this suite. • [SLOW TEST:16.880 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":138,"skipped":2238,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:10.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 3 21:27:10.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never 
--generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7918' Feb 3 21:27:13.738: INFO: stderr: "" Feb 3 21:27:13.738: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765 Feb 3 21:27:13.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7918' Feb 3 21:27:21.310: INFO: stderr: "" Feb 3 21:27:21.310: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:21.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7918" for this suite. • [SLOW TEST:10.513 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":139,"skipped":2245,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:21.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ch9s STEP: Creating a pod to test atomic-volume-subpath Feb 3 21:27:21.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ch9s" in namespace "subpath-2484" to be "success or failure" Feb 3 21:27:21.416: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984731ms Feb 3 21:27:23.421: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007304784s Feb 3 21:27:25.425: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 4.011540814s Feb 3 21:27:27.429: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015282192s Feb 3 21:27:29.432: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 8.018826446s Feb 3 21:27:31.436: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 10.022970268s Feb 3 21:27:33.442: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 12.029142865s Feb 3 21:27:35.446: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 14.033100531s Feb 3 21:27:37.460: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 16.047044513s Feb 3 21:27:39.465: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 18.052130056s Feb 3 21:27:41.470: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 20.056477426s Feb 3 21:27:43.474: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Running", Reason="", readiness=true. Elapsed: 22.060819226s Feb 3 21:27:45.478: INFO: Pod "pod-subpath-test-configmap-ch9s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064717649s STEP: Saw pod success Feb 3 21:27:45.478: INFO: Pod "pod-subpath-test-configmap-ch9s" satisfied condition "success or failure" Feb 3 21:27:45.481: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-ch9s container test-container-subpath-configmap-ch9s: STEP: delete the pod Feb 3 21:27:45.525: INFO: Waiting for pod pod-subpath-test-configmap-ch9s to disappear Feb 3 21:27:45.554: INFO: Pod pod-subpath-test-configmap-ch9s no longer exists STEP: Deleting pod pod-subpath-test-configmap-ch9s Feb 3 21:27:45.554: INFO: Deleting pod "pod-subpath-test-configmap-ch9s" in namespace "subpath-2484" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:45.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2484" for this suite. 
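
The subpath test above mounts a single ConfigMap key over one existing file, rather than letting the volume shadow a whole directory. A sketch of that shape follows; everything named here is hypothetical (ConfigMap "my-configmap", key "data-1", and /etc/hostname as the pre-existing target file), since the log does not show the pod spec itself.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// subPath selects one key inside the volume, so only that file is
	// overlaid at MountPath; the rest of /etc stays untouched.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // assumed to exist
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/hostname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/hostname", // an existing file in the container filesystem
					SubPath:   "data-1",        // hypothetical key inside my-configmap
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
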
• [SLOW TEST:24.249 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":140,"skipped":2258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 3 21:27:45.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create a job from an image, then delete the job [Deprecated] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Feb 3 21:27:45.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9376 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 3 21:27:48.334: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0203 21:27:48.263066 1609 log.go:172] (0xc000100370) (0xc00065da40) Create stream\nI0203 21:27:48.263117 1609 log.go:172] (0xc000100370) (0xc00065da40) Stream added, broadcasting: 1\nI0203 21:27:48.265773 1609 log.go:172] (0xc000100370) Reply frame received for 1\nI0203 21:27:48.265809 1609 log.go:172] (0xc000100370) (0xc00065dae0) Create stream\nI0203 21:27:48.265820 1609 log.go:172] (0xc000100370) (0xc00065dae0) Stream added, broadcasting: 3\nI0203 21:27:48.266651 1609 log.go:172] (0xc000100370) Reply frame received for 3\nI0203 21:27:48.266687 1609 log.go:172] (0xc000100370) (0xc000b2e0a0) Create stream\nI0203 21:27:48.266696 1609 log.go:172] (0xc000100370) (0xc000b2e0a0) Stream added, broadcasting: 5\nI0203 21:27:48.267495 1609 log.go:172] (0xc000100370) Reply frame received for 5\nI0203 21:27:48.267531 1609 log.go:172] (0xc000100370) (0xc0006a0000) Create stream\nI0203 21:27:48.267541 1609 log.go:172] (0xc000100370) (0xc0006a0000) Stream added, broadcasting: 7\nI0203 21:27:48.268497 1609 log.go:172] (0xc000100370) Reply frame received for 7\nI0203 21:27:48.268742 1609 log.go:172] (0xc00065dae0) (3) Writing data frame\nI0203 21:27:48.268954 1609 log.go:172] (0xc00065dae0) (3) Writing data frame\nI0203 21:27:48.269749 1609 log.go:172] (0xc000100370) Data frame received for 5\nI0203 21:27:48.269771 1609 log.go:172] (0xc000b2e0a0) (5) Data frame handling\nI0203 21:27:48.269787 1609 log.go:172] (0xc000b2e0a0) (5) Data frame sent\nI0203 21:27:48.270330 1609 log.go:172] (0xc000100370) Data frame received for 5\nI0203 21:27:48.270345 1609 log.go:172] (0xc000b2e0a0) (5) Data frame handling\nI0203 21:27:48.270359 1609 log.go:172] (0xc000b2e0a0) (5) Data frame sent\nI0203 21:27:48.310160 1609 log.go:172] (0xc000100370) Data frame received for 5\nI0203 21:27:48.310182 1609 log.go:172] (0xc000b2e0a0) (5) Data frame handling\nI0203 21:27:48.310231 1609 log.go:172] (0xc000100370) Data frame received for 7\nI0203 21:27:48.310278 1609 log.go:172] (0xc0006a0000) (7) Data frame handling\nI0203 21:27:48.310474 1609 log.go:172] (0xc000100370) Data frame received for 1\nI0203 21:27:48.310496 1609 log.go:172] (0xc00065da40) (1) Data frame handling\nI0203 21:27:48.310512 1609 log.go:172] (0xc00065da40) (1) Data frame sent\nI0203 21:27:48.310662 1609 log.go:172] (0xc000100370) (0xc00065dae0) Stream removed, broadcasting: 3\nI0203 21:27:48.310711 1609 log.go:172] (0xc000100370) (0xc00065da40) Stream removed, broadcasting: 1\nI0203 21:27:48.310855 1609 log.go:172] (0xc000100370) Go away received\nI0203 21:27:48.311163 1609 log.go:172] (0xc000100370) (0xc00065da40) Stream removed, broadcasting: 1\nI0203 21:27:48.311193 1609 log.go:172] (0xc000100370) (0xc00065dae0) Stream removed, broadcasting: 3\nI0203 21:27:48.311210 1609 log.go:172] (0xc000100370) (0xc000b2e0a0) Stream removed, broadcasting: 5\nI0203 21:27:48.311226 1609 log.go:172] (0xc000100370) (0xc0006a0000) Stream removed, broadcasting: 7\n" Feb 3 21:27:48.334: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 3 21:27:50.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9376" for this suite. 
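
The run above relies on the deprecated --generator=job/v1; the stderr captured in the log itself points users at kubectl run --generator=run-pod/v1 or kubectl create instead. Reconstructed from the command line in the log, the Job that generator produced looks roughly like the sketch below (container details are inferred from the flags, not dumped by the test; Stdin plus StdinOnce is what lets the attached session feed "abcd1234" through cat exactly once).

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Equivalent of: kubectl run e2e-test-rm-busybox-job --rm --attach --stdin
	//   --generator=job/v1 --restart=OnFailure --image=busybox:1.29 -- sh -c "cat && echo 'stdin closed'"
	job := batchv1.Job{
		TypeMeta:   metav1.TypeMeta{Kind: "Job", APIVersion: "batch/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"}, // name from the log above
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true,
						StdinOnce: true, // close the container's stdin after the first attach detaches
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(job, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

With --rm, kubectl deletes the Job as soon as the attached session ends, which is the job.batch "e2e-test-rm-busybox-job" deleted line in the stdout above.
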
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":141,"skipped":2352,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:27:50.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  3 21:27:50.423: INFO: PodSpec: initContainers in spec.initContainers
Feb  3 21:28:36.714: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f58ff8b5-73d9-4ac9-8209-e9357291d1b2", GenerateName:"", Namespace:"init-container-3606", SelfLink:"/api/v1/namespaces/init-container-3606/pods/pod-init-f58ff8b5-73d9-4ac9-8209-e9357291d1b2", UID:"fc0de2e9-7849-4d4c-99a5-a079fcdfcd0e", ResourceVersion:"6388991", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63747984470, loc:(*time.Location)(0x791c680)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"423547217"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cvl7s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006b5ef80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvl7s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvl7s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cvl7s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004755a68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002904780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004755b00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004755b40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004755b48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004755b4c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984470, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984470, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984470, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984470, loc:(*time.Location)(0x791c680)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.46", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.46"}}, StartTime:(*v1.Time)(0xc0055a6000), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0055a6040), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009d2d20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c43aadc709a5bfa011b814652b9d13ce53c81d11472aa69040aa90d13a2dc9ab", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0055a6060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0055a6020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004755bdf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:28:36.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3606" for this suite.
• [SLOW TEST:46.380 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":142,"skipped":2403,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:28:36.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Feb  3 21:28:37.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7800'
Feb  3 21:28:37.361: INFO: stderr: ""
Feb  3 21:28:37.361: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 21:28:37.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7800'
Feb  3 21:28:37.459: INFO: stderr: ""
Feb  3 21:28:37.459: INFO: stdout: "update-demo-nautilus-849bn update-demo-nautilus-9g9fh "
Feb  3 21:28:37.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-849bn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:28:37.549: INFO: stderr: ""
Feb  3 21:28:37.549: INFO: stdout: ""
Feb  3 21:28:37.549: INFO: update-demo-nautilus-849bn is created but not running
Feb  3 21:28:42.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7800'
Feb  3 21:28:42.676: INFO: stderr: ""
Feb  3 21:28:42.676: INFO: stdout: "update-demo-nautilus-849bn update-demo-nautilus-9g9fh "
Feb  3 21:28:42.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-849bn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:28:42.755: INFO: stderr: ""
Feb  3 21:28:42.755: INFO: stdout: "true"
Feb  3 21:28:42.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-849bn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:28:42.854: INFO: stderr: ""
Feb  3 21:28:42.854: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:28:42.854: INFO: validating pod update-demo-nautilus-849bn
Feb  3 21:28:42.858: INFO: got data: { "image": "nautilus.jpg" }
Feb  3 21:28:42.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 21:28:42.858: INFO: update-demo-nautilus-849bn is verified up and running
Feb  3 21:28:42.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9g9fh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:28:42.949: INFO: stderr: ""
Feb  3 21:28:42.949: INFO: stdout: "true"
Feb  3 21:28:42.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9g9fh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:28:43.052: INFO: stderr: ""
Feb  3 21:28:43.052: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:28:43.052: INFO: validating pod update-demo-nautilus-9g9fh
Feb  3 21:28:43.056: INFO: got data: { "image": "nautilus.jpg" }
Feb  3 21:28:43.056: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 21:28:43.056: INFO: update-demo-nautilus-9g9fh is verified up and running
STEP: rolling-update to new replication controller
Feb  3 21:28:43.059: INFO: scanned /root for discovery docs:
Feb  3 21:28:43.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7800'
Feb  3 21:29:05.679: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 21:29:05.679: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 21:29:05.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7800'
Feb  3 21:29:05.790: INFO: stderr: ""
Feb  3 21:29:05.790: INFO: stdout: "update-demo-kitten-5bz6w update-demo-kitten-765xb "
Feb  3 21:29:05.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5bz6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:29:05.913: INFO: stderr: ""
Feb  3 21:29:05.913: INFO: stdout: "true"
Feb  3 21:29:05.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5bz6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:29:06.019: INFO: stderr: ""
Feb  3 21:29:06.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  3 21:29:06.019: INFO: validating pod update-demo-kitten-5bz6w
Feb  3 21:29:06.025: INFO: got data: { "image": "kitten.jpg" }
Feb  3 21:29:06.025: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  3 21:29:06.025: INFO: update-demo-kitten-5bz6w is verified up and running
Feb  3 21:29:06.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-765xb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:29:06.128: INFO: stderr: ""
Feb  3 21:29:06.128: INFO: stdout: "true"
Feb  3 21:29:06.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-765xb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7800'
Feb  3 21:29:06.231: INFO: stderr: ""
Feb  3 21:29:06.231: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  3 21:29:06.231: INFO: validating pod update-demo-kitten-765xb
Feb  3 21:29:06.235: INFO: got data: { "image": "kitten.jpg" }
Feb  3 21:29:06.235: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  3 21:29:06.235: INFO: update-demo-kitten-765xb is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:06.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7800" for this suite.
• [SLOW TEST:29.515 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":143,"skipped":2416,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:06.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:29:06.795: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:29:08.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:29:10.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984546, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:29:13.851: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:29:13.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:15.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7172" for this suite.
STEP: Destroying namespace "webhook-7172-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.026 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":144,"skipped":2452,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:15.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  3 21:29:15.414: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5690 /api/v1/namespaces/watch-5690/configmaps/e2e-watch-test-resource-version a07d6fc2-49ab-48d5-8745-76dc8f366992 6389309 0 2021-02-03 21:29:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 21:29:15.415: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5690 /api/v1/namespaces/watch-5690/configmaps/e2e-watch-test-resource-version a07d6fc2-49ab-48d5-8745-76dc8f366992 6389310 0 2021-02-03 21:29:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:15.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5690" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":145,"skipped":2473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:15.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 21:29:15.905: INFO: Waiting up to 5m0s for pod "pod-917610ff-8c02-4787-a88a-4387e144b079" in namespace "emptydir-477" to be "success or failure"
Feb  3 21:29:15.939: INFO: Pod "pod-917610ff-8c02-4787-a88a-4387e144b079": Phase="Pending", Reason="", readiness=false. Elapsed: 34.456041ms
Feb  3 21:29:17.995: INFO: Pod "pod-917610ff-8c02-4787-a88a-4387e144b079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090349783s
Feb  3 21:29:20.000: INFO: Pod "pod-917610ff-8c02-4787-a88a-4387e144b079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09495237s
STEP: Saw pod success
Feb  3 21:29:20.000: INFO: Pod "pod-917610ff-8c02-4787-a88a-4387e144b079" satisfied condition "success or failure"
Feb  3 21:29:20.003: INFO: Trying to get logs from node jerma-worker2 pod pod-917610ff-8c02-4787-a88a-4387e144b079 container test-container: 
STEP: delete the pod
Feb  3 21:29:20.055: INFO: Waiting for pod pod-917610ff-8c02-4787-a88a-4387e144b079 to disappear
Feb  3 21:29:20.071: INFO: Pod pod-917610ff-8c02-4787-a88a-4387e144b079 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:20.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-477" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2517,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:20.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:29:21.114: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:29:23.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984561, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984561, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984561, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984561, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:29:26.260: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5915" for this suite.
STEP: Destroying namespace "webhook-5915-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.336 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":147,"skipped":2525,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:26.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  3 21:29:32.617: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8092 PodName:pod-sharedvolume-dfcb57f9-7d8f-4331-82b7-e41aff879d6e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:29:32.617: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:29:32.678701 6 log.go:172] (0xc002d5ab00) (0xc0011a6dc0) Create stream
I0203 21:29:32.678729 6 log.go:172] (0xc002d5ab00) (0xc0011a6dc0) Stream added, broadcasting: 1
I0203 21:29:32.680179 6 log.go:172] (0xc002d5ab00) Reply frame received for 1
I0203 21:29:32.680202 6 log.go:172] (0xc002d5ab00) (0xc0011a6e60) Create stream
I0203 21:29:32.680210 6 log.go:172] (0xc002d5ab00) (0xc0011a6e60) Stream added, broadcasting: 3
I0203 21:29:32.680758 6 log.go:172] (0xc002d5ab00) Reply frame received for 3
I0203 21:29:32.680780 6 log.go:172] (0xc002d5ab00) (0xc000f78aa0) Create stream
I0203 21:29:32.680788 6 log.go:172] (0xc002d5ab00) (0xc000f78aa0) Stream added, broadcasting: 5
I0203 21:29:32.681521 6 log.go:172] (0xc002d5ab00) Reply frame received for 5
I0203 21:29:32.736125 6 log.go:172] (0xc002d5ab00) Data frame received for 5
I0203 21:29:32.736159 6 log.go:172] (0xc000f78aa0) (5) Data frame handling
I0203 21:29:32.736177 6 log.go:172] (0xc002d5ab00) Data frame received for 3
I0203 21:29:32.736186 6 log.go:172] (0xc0011a6e60) (3) Data frame handling
I0203 21:29:32.736195 6 log.go:172] (0xc0011a6e60) (3) Data frame sent
I0203 21:29:32.736203 6 log.go:172] (0xc002d5ab00) Data frame received for 3
I0203 21:29:32.736210 6 log.go:172] (0xc0011a6e60) (3) Data frame handling
I0203 21:29:32.738250 6 log.go:172] (0xc002d5ab00) Data frame received for 1
I0203 21:29:32.738274 6 log.go:172] (0xc0011a6dc0) (1) Data frame handling
I0203 21:29:32.738294 6 log.go:172] (0xc0011a6dc0) (1) Data frame sent
I0203 21:29:32.738317 6 log.go:172] (0xc002d5ab00) (0xc0011a6dc0) Stream removed, broadcasting: 1
I0203 21:29:32.738387 6 log.go:172] (0xc002d5ab00) (0xc0011a6dc0) Stream removed, broadcasting: 1
I0203 21:29:32.738456 6 log.go:172] (0xc002d5ab00) (0xc0011a6e60) Stream removed, broadcasting: 3
I0203 21:29:32.738488 6 log.go:172] (0xc002d5ab00) (0xc000f78aa0) Stream removed, broadcasting: 5
Feb  3 21:29:32.738: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:32.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0203 21:29:32.738643 6 log.go:172] (0xc002d5ab00) Go away received
STEP: Destroying namespace "emptydir-8092" for this suite.
• [SLOW TEST:6.325 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":148,"skipped":2542,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:32.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:29:32.838: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
alternatives.log
containers/
[identical directory listing returned for proxy requests (1) through (19); the per-request INFO lines and the teardown log for the proxy test namespace are truncated here]
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":149,"skipped":2544,"failed":0}
SSSSSSSSSSSSS
------------------------------
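For reference, the GET this proxy test issues can be reproduced with client-go. The following is a minimal illustrative sketch, not part of the suite: it assumes a recent client-go (where DoRaw takes a context) and reuses this run's node name and kubeconfig path purely as examples.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path mirrors the one in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/jerma-worker2/proxy/logs/ — the apiserver proxies the
	// request to that node's kubelet, which answers with the /var/log
	// directory listing seen above (alternatives.log, containers/, ...).
	body, err := clientset.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("jerma-worker2").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}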
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 21:29:32.990: INFO: Waiting up to 5m0s for pod "downward-api-3687db78-57e0-4de4-8894-b7c96291b140" in namespace "downward-api-7485" to be "success or failure"
Feb  3 21:29:33.004: INFO: Pod "downward-api-3687db78-57e0-4de4-8894-b7c96291b140": Phase="Pending", Reason="", readiness=false. Elapsed: 14.283619ms
Feb  3 21:29:35.061: INFO: Pod "downward-api-3687db78-57e0-4de4-8894-b7c96291b140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071149457s
Feb  3 21:29:37.065: INFO: Pod "downward-api-3687db78-57e0-4de4-8894-b7c96291b140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07541925s
STEP: Saw pod success
Feb  3 21:29:37.065: INFO: Pod "downward-api-3687db78-57e0-4de4-8894-b7c96291b140" satisfied condition "success or failure"
Feb  3 21:29:37.068: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3687db78-57e0-4de4-8894-b7c96291b140 container dapi-container: 
STEP: delete the pod
Feb  3 21:29:37.096: INFO: Waiting for pod downward-api-3687db78-57e0-4de4-8894-b7c96291b140 to disappear
Feb  3 21:29:37.126: INFO: Pod downward-api-3687db78-57e0-4de4-8894-b7c96291b140 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:37.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7485" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2557,"failed":0}
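The pod that this kind of downward API test creates can be sketched with the Kubernetes API types. The construction below is a minimal illustration, assuming the k8s.io/api and k8s.io/apimachinery modules; the pod and container names are made up. Because the container declares no resource limits, the kubelet falls back to node allocatable when it expands the resourceFieldRef environment variables, which is the behavior the test asserts.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				// No Resources set: with the limit absent, these env vars are
				// resolved from node allocatable instead of a container limit.
				Env: []v1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("%#v\n", pod.Spec.Containers[0].Env)
}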
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:37.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 21:29:37.224: INFO: Waiting up to 5m0s for pod "pod-1ad25829-a32a-4450-8198-f8c489d2c862" in namespace "emptydir-6255" to be "success or failure"
Feb  3 21:29:37.252: INFO: Pod "pod-1ad25829-a32a-4450-8198-f8c489d2c862": Phase="Pending", Reason="", readiness=false. Elapsed: 27.88484ms
Feb  3 21:29:39.255: INFO: Pod "pod-1ad25829-a32a-4450-8198-f8c489d2c862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030830526s
Feb  3 21:29:41.258: INFO: Pod "pod-1ad25829-a32a-4450-8198-f8c489d2c862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034185457s
STEP: Saw pod success
Feb  3 21:29:41.258: INFO: Pod "pod-1ad25829-a32a-4450-8198-f8c489d2c862" satisfied condition "success or failure"
Feb  3 21:29:41.282: INFO: Trying to get logs from node jerma-worker2 pod pod-1ad25829-a32a-4450-8198-f8c489d2c862 container test-container: 
STEP: delete the pod
Feb  3 21:29:41.313: INFO: Waiting for pod pod-1ad25829-a32a-4450-8198-f8c489d2c862 to disappear
Feb  3 21:29:41.317: INFO: Pod pod-1ad25829-a32a-4450-8198-f8c489d2c862 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:29:41.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6255" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2599,"failed":0}
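A pod exercising the same (root,0644,tmpfs) combination can be sketched as follows. This is an illustrative reconstruction, not the suite's own test pod: setting the emptyDir medium to Memory backs the volume with tmpfs rather than node disk, and the container writes a file as root with mode 0644, mirroring what the test verifies.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" requests a tmpfs-backed emptyDir.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Write a file with mode 0644 and show its permissions.
				Command: []string{"sh", "-c",
					"echo data > /test/file && chmod 0644 /test/file && ls -l /test"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	fmt.Printf("%#v\n", pod.Spec.Volumes[0].VolumeSource.EmptyDir)
}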
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:29:41.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3188
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-3188
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3188
Feb  3 21:29:41.431: INFO: Found 0 stateful pods, waiting for 1
Feb  3 21:29:51.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  3 21:29:51.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:29:51.741: INFO: stderr: "I0203 21:29:51.577252    1931 log.go:172] (0xc0008f4790) (0xc000759ae0) Create stream\nI0203 21:29:51.577309    1931 log.go:172] (0xc0008f4790) (0xc000759ae0) Stream added, broadcasting: 1\nI0203 21:29:51.580050    1931 log.go:172] (0xc0008f4790) Reply frame received for 1\nI0203 21:29:51.580092    1931 log.go:172] (0xc0008f4790) (0xc000759d60) Create stream\nI0203 21:29:51.580102    1931 log.go:172] (0xc0008f4790) (0xc000759d60) Stream added, broadcasting: 3\nI0203 21:29:51.581157    1931 log.go:172] (0xc0008f4790) Reply frame received for 3\nI0203 21:29:51.581197    1931 log.go:172] (0xc0008f4790) (0xc000a2c000) Create stream\nI0203 21:29:51.581208    1931 log.go:172] (0xc0008f4790) (0xc000a2c000) Stream added, broadcasting: 5\nI0203 21:29:51.582137    1931 log.go:172] (0xc0008f4790) Reply frame received for 5\nI0203 21:29:51.679657    1931 log.go:172] (0xc0008f4790) Data frame received for 5\nI0203 21:29:51.679690    1931 log.go:172] (0xc000a2c000) (5) Data frame handling\nI0203 21:29:51.679712    1931 log.go:172] (0xc000a2c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:29:51.730667    1931 log.go:172] (0xc0008f4790) Data frame received for 3\nI0203 21:29:51.730709    1931 log.go:172] (0xc000759d60) (3) Data frame handling\nI0203 21:29:51.730769    1931 log.go:172] (0xc000759d60) (3) Data frame sent\nI0203 21:29:51.730995    1931 log.go:172] (0xc0008f4790) Data frame received for 3\nI0203 21:29:51.731049    1931 log.go:172] (0xc000759d60) (3) Data frame handling\nI0203 21:29:51.731336    1931 log.go:172] (0xc0008f4790) Data frame received for 5\nI0203 21:29:51.731374    1931 log.go:172] (0xc000a2c000) (5) Data frame handling\nI0203 21:29:51.733788    1931 log.go:172] (0xc0008f4790) Data frame received for 1\nI0203 21:29:51.733826    1931 log.go:172] (0xc000759ae0) (1) Data frame handling\nI0203 21:29:51.733842    1931 log.go:172] (0xc000759ae0) (1) Data frame sent\nI0203 21:29:51.733859    1931 log.go:172] (0xc0008f4790) (0xc000759ae0) Stream removed, broadcasting: 1\nI0203 21:29:51.733880    1931 log.go:172] (0xc0008f4790) Go away received\nI0203 21:29:51.734368    1931 log.go:172] (0xc0008f4790) (0xc000759ae0) Stream removed, broadcasting: 1\nI0203 21:29:51.734411    1931 log.go:172] (0xc0008f4790) (0xc000759d60) Stream removed, broadcasting: 3\nI0203 21:29:51.734436    1931 log.go:172] (0xc0008f4790) (0xc000a2c000) Stream removed, broadcasting: 5\n"
Feb  3 21:29:51.741: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:29:51.741: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 21:29:51.745: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 21:30:01.749: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 21:30:01.749: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 21:30:01.762: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Feb  3 21:30:01.762: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  }]
Feb  3 21:30:01.762: INFO: 
Feb  3 21:30:01.762: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  3 21:30:02.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996824044s
Feb  3 21:30:03.786: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992883876s
Feb  3 21:30:04.791: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972671462s
Feb  3 21:30:05.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.968307813s
Feb  3 21:30:06.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963596849s
Feb  3 21:30:07.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.958616571s
Feb  3 21:30:08.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95442294s
Feb  3 21:30:09.812: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95019711s
Feb  3 21:30:10.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.591567ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3188
Feb  3 21:30:11.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 21:30:12.048: INFO: stderr: "I0203 21:30:11.966470    1954 log.go:172] (0xc000a27340) (0xc0009d2780) Create stream\nI0203 21:30:11.966541    1954 log.go:172] (0xc000a27340) (0xc0009d2780) Stream added, broadcasting: 1\nI0203 21:30:11.970546    1954 log.go:172] (0xc000a27340) Reply frame received for 1\nI0203 21:30:11.970581    1954 log.go:172] (0xc000a27340) (0xc0006ba780) Create stream\nI0203 21:30:11.970591    1954 log.go:172] (0xc000a27340) (0xc0006ba780) Stream added, broadcasting: 3\nI0203 21:30:11.971303    1954 log.go:172] (0xc000a27340) Reply frame received for 3\nI0203 21:30:11.971347    1954 log.go:172] (0xc000a27340) (0xc00053f540) Create stream\nI0203 21:30:11.971356    1954 log.go:172] (0xc000a27340) (0xc00053f540) Stream added, broadcasting: 5\nI0203 21:30:11.972070    1954 log.go:172] (0xc000a27340) Reply frame received for 5\nI0203 21:30:12.041395    1954 log.go:172] (0xc000a27340) Data frame received for 5\nI0203 21:30:12.041429    1954 log.go:172] (0xc00053f540) (5) Data frame handling\nI0203 21:30:12.041451    1954 log.go:172] (0xc00053f540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 21:30:12.041487    1954 log.go:172] (0xc000a27340) Data frame received for 3\nI0203 21:30:12.041547    1954 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0203 21:30:12.041570    1954 log.go:172] (0xc0006ba780) (3) Data frame sent\nI0203 21:30:12.041589    1954 log.go:172] (0xc000a27340) Data frame received for 3\nI0203 21:30:12.041607    1954 log.go:172] (0xc0006ba780) (3) Data frame handling\nI0203 21:30:12.041658    1954 log.go:172] (0xc000a27340) Data frame received for 5\nI0203 21:30:12.041682    1954 log.go:172] (0xc00053f540) (5) Data frame handling\nI0203 21:30:12.042929    1954 log.go:172] (0xc000a27340) Data frame received for 1\nI0203 21:30:12.042962    1954 log.go:172] (0xc0009d2780) (1) Data frame handling\nI0203 21:30:12.042996    1954 log.go:172] (0xc0009d2780) (1) Data frame sent\nI0203 21:30:12.043023    1954 log.go:172] (0xc000a27340) (0xc0009d2780) Stream removed, broadcasting: 1\nI0203 21:30:12.043058    1954 log.go:172] (0xc000a27340) Go away received\nI0203 21:30:12.043511    1954 log.go:172] (0xc000a27340) (0xc0009d2780) Stream removed, broadcasting: 1\nI0203 21:30:12.043536    1954 log.go:172] (0xc000a27340) (0xc0006ba780) Stream removed, broadcasting: 3\nI0203 21:30:12.043557    1954 log.go:172] (0xc000a27340) (0xc00053f540) Stream removed, broadcasting: 5\n"
Feb  3 21:30:12.048: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 21:30:12.048: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 21:30:12.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 21:30:12.288: INFO: stderr: "I0203 21:30:12.218581    1973 log.go:172] (0xc00054cb00) (0xc000615c20) Create stream\nI0203 21:30:12.218629    1973 log.go:172] (0xc00054cb00) (0xc000615c20) Stream added, broadcasting: 1\nI0203 21:30:12.220259    1973 log.go:172] (0xc00054cb00) Reply frame received for 1\nI0203 21:30:12.220307    1973 log.go:172] (0xc00054cb00) (0xc000920000) Create stream\nI0203 21:30:12.220319    1973 log.go:172] (0xc00054cb00) (0xc000920000) Stream added, broadcasting: 3\nI0203 21:30:12.221181    1973 log.go:172] (0xc00054cb00) Reply frame received for 3\nI0203 21:30:12.221210    1973 log.go:172] (0xc00054cb00) (0xc0006f6000) Create stream\nI0203 21:30:12.221220    1973 log.go:172] (0xc00054cb00) (0xc0006f6000) Stream added, broadcasting: 5\nI0203 21:30:12.222088    1973 log.go:172] (0xc00054cb00) Reply frame received for 5\nI0203 21:30:12.281314    1973 log.go:172] (0xc00054cb00) Data frame received for 5\nI0203 21:30:12.281370    1973 log.go:172] (0xc0006f6000) (5) Data frame handling\nI0203 21:30:12.281390    1973 log.go:172] (0xc0006f6000) (5) Data frame sent\nI0203 21:30:12.281406    1973 log.go:172] (0xc00054cb00) Data frame received for 5\nI0203 21:30:12.281418    1973 log.go:172] (0xc0006f6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 21:30:12.281456    1973 log.go:172] (0xc00054cb00) Data frame received for 3\nI0203 21:30:12.281491    1973 log.go:172] (0xc000920000) (3) Data frame handling\nI0203 21:30:12.281529    1973 log.go:172] (0xc000920000) (3) Data frame sent\nI0203 21:30:12.281551    1973 log.go:172] (0xc00054cb00) Data frame received for 3\nI0203 21:30:12.281571    1973 log.go:172] (0xc000920000) (3) Data frame handling\nI0203 21:30:12.283340    1973 log.go:172] (0xc00054cb00) Data frame received for 1\nI0203 21:30:12.283362    1973 log.go:172] (0xc000615c20) (1) Data frame handling\nI0203 21:30:12.283381    1973 log.go:172] (0xc000615c20) (1) Data frame sent\nI0203 21:30:12.283493    1973 log.go:172] (0xc00054cb00) (0xc000615c20) Stream removed, broadcasting: 1\nI0203 21:30:12.283687    1973 log.go:172] (0xc00054cb00) Go away received\nI0203 21:30:12.283876    1973 log.go:172] (0xc00054cb00) (0xc000615c20) Stream removed, broadcasting: 1\nI0203 21:30:12.283894    1973 log.go:172] (0xc00054cb00) (0xc000920000) Stream removed, broadcasting: 3\nI0203 21:30:12.283900    1973 log.go:172] (0xc00054cb00) (0xc0006f6000) Stream removed, broadcasting: 5\n"
Feb  3 21:30:12.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 21:30:12.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 21:30:12.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 21:30:12.517: INFO: stderr: "I0203 21:30:12.428902    1989 log.go:172] (0xc000a2a0b0) (0xc00052b4a0) Create stream\nI0203 21:30:12.428951    1989 log.go:172] (0xc000a2a0b0) (0xc00052b4a0) Stream added, broadcasting: 1\nI0203 21:30:12.431456    1989 log.go:172] (0xc000a2a0b0) Reply frame received for 1\nI0203 21:30:12.431497    1989 log.go:172] (0xc000a2a0b0) (0xc000a4c000) Create stream\nI0203 21:30:12.431507    1989 log.go:172] (0xc000a2a0b0) (0xc000a4c000) Stream added, broadcasting: 3\nI0203 21:30:12.432531    1989 log.go:172] (0xc000a2a0b0) Reply frame received for 3\nI0203 21:30:12.432581    1989 log.go:172] (0xc000a2a0b0) (0xc00098e000) Create stream\nI0203 21:30:12.432594    1989 log.go:172] (0xc000a2a0b0) (0xc00098e000) Stream added, broadcasting: 5\nI0203 21:30:12.433844    1989 log.go:172] (0xc000a2a0b0) Reply frame received for 5\nI0203 21:30:12.510814    1989 log.go:172] (0xc000a2a0b0) Data frame received for 3\nI0203 21:30:12.510851    1989 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0203 21:30:12.510859    1989 log.go:172] (0xc000a4c000) (3) Data frame sent\nI0203 21:30:12.510864    1989 log.go:172] (0xc000a2a0b0) Data frame received for 3\nI0203 21:30:12.510869    1989 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0203 21:30:12.510891    1989 log.go:172] (0xc000a2a0b0) Data frame received for 5\nI0203 21:30:12.510896    1989 log.go:172] (0xc00098e000) (5) Data frame handling\nI0203 21:30:12.510902    1989 log.go:172] (0xc00098e000) (5) Data frame sent\nI0203 21:30:12.510906    1989 log.go:172] (0xc000a2a0b0) Data frame received for 5\nI0203 21:30:12.510911    1989 log.go:172] (0xc00098e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 21:30:12.512223    1989 log.go:172] (0xc000a2a0b0) Data frame received for 1\nI0203 21:30:12.512253    1989 log.go:172] (0xc00052b4a0) (1) Data frame handling\nI0203 21:30:12.512280    1989 log.go:172] (0xc00052b4a0) (1) Data frame sent\nI0203 21:30:12.512306    1989 log.go:172] (0xc000a2a0b0) (0xc00052b4a0) Stream removed, broadcasting: 1\nI0203 21:30:12.512342    1989 log.go:172] (0xc000a2a0b0) Go away received\nI0203 21:30:12.512709    1989 log.go:172] (0xc000a2a0b0) (0xc00052b4a0) Stream removed, broadcasting: 1\nI0203 21:30:12.512736    1989 log.go:172] (0xc000a2a0b0) (0xc000a4c000) Stream removed, broadcasting: 3\nI0203 21:30:12.512753    1989 log.go:172] (0xc000a2a0b0) (0xc00098e000) Stream removed, broadcasting: 5\n"
Feb  3 21:30:12.518: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 21:30:12.518: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 21:30:12.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:30:12.522: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:30:12.522: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
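
The three execs above move index.html back into httpd's document root, so each pod's HTTP readiness probe starts succeeding again; that is what flips the pods back to Running - Ready=true. A minimal spot-check sketch, reusing the names and namespace from the log (the jsonpath query is illustrative, not something the test itself runs):

  kubectl --kubeconfig=/root/.kube/config -n statefulset-3188 get pod ss-0 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
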
STEP: Scale down will not halt with unhealthy stateful pod
Feb  3 21:30:12.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:30:12.722: INFO: stderr: "I0203 21:30:12.654183    2010 log.go:172] (0xc000a28630) (0xc0009a60a0) Create stream\nI0203 21:30:12.654229    2010 log.go:172] (0xc000a28630) (0xc0009a60a0) Stream added, broadcasting: 1\nI0203 21:30:12.656252    2010 log.go:172] (0xc000a28630) Reply frame received for 1\nI0203 21:30:12.656296    2010 log.go:172] (0xc000a28630) (0xc0009a6140) Create stream\nI0203 21:30:12.656311    2010 log.go:172] (0xc000a28630) (0xc0009a6140) Stream added, broadcasting: 3\nI0203 21:30:12.657163    2010 log.go:172] (0xc000a28630) Reply frame received for 3\nI0203 21:30:12.657193    2010 log.go:172] (0xc000a28630) (0xc00059d400) Create stream\nI0203 21:30:12.657203    2010 log.go:172] (0xc000a28630) (0xc00059d400) Stream added, broadcasting: 5\nI0203 21:30:12.657929    2010 log.go:172] (0xc000a28630) Reply frame received for 5\nI0203 21:30:12.713566    2010 log.go:172] (0xc000a28630) Data frame received for 5\nI0203 21:30:12.713599    2010 log.go:172] (0xc000a28630) Data frame received for 3\nI0203 21:30:12.713623    2010 log.go:172] (0xc0009a6140) (3) Data frame handling\nI0203 21:30:12.713637    2010 log.go:172] (0xc0009a6140) (3) Data frame sent\nI0203 21:30:12.713643    2010 log.go:172] (0xc000a28630) Data frame received for 3\nI0203 21:30:12.713649    2010 log.go:172] (0xc0009a6140) (3) Data frame handling\nI0203 21:30:12.713676    2010 log.go:172] (0xc00059d400) (5) Data frame handling\nI0203 21:30:12.713688    2010 log.go:172] (0xc00059d400) (5) Data frame sent\nI0203 21:30:12.713707    2010 log.go:172] (0xc000a28630) Data frame received for 5\nI0203 21:30:12.713723    2010 log.go:172] (0xc00059d400) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:30:12.715324    2010 log.go:172] (0xc000a28630) Data frame received for 1\nI0203 21:30:12.715362    2010 log.go:172] (0xc0009a60a0) (1) Data frame handling\nI0203 21:30:12.715379    2010 log.go:172] (0xc0009a60a0) (1) Data frame sent\nI0203 21:30:12.715397    2010 log.go:172] (0xc000a28630) (0xc0009a60a0) Stream removed, broadcasting: 1\nI0203 21:30:12.715471    2010 log.go:172] (0xc000a28630) Go away received\nI0203 21:30:12.715812    2010 log.go:172] (0xc000a28630) (0xc0009a60a0) Stream removed, broadcasting: 1\nI0203 21:30:12.715829    2010 log.go:172] (0xc000a28630) (0xc0009a6140) Stream removed, broadcasting: 3\nI0203 21:30:12.715838    2010 log.go:172] (0xc000a28630) (0xc00059d400) Stream removed, broadcasting: 5\n"
Feb  3 21:30:12.722: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:30:12.722: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 21:30:12.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:30:12.934: INFO: stderr: "I0203 21:30:12.840058    2031 log.go:172] (0xc0000f62c0) (0xc0006b4820) Create stream\nI0203 21:30:12.840122    2031 log.go:172] (0xc0000f62c0) (0xc0006b4820) Stream added, broadcasting: 1\nI0203 21:30:12.842483    2031 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0203 21:30:12.842511    2031 log.go:172] (0xc0000f62c0) (0xc00053f5e0) Create stream\nI0203 21:30:12.842518    2031 log.go:172] (0xc0000f62c0) (0xc00053f5e0) Stream added, broadcasting: 3\nI0203 21:30:12.843243    2031 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0203 21:30:12.843270    2031 log.go:172] (0xc0000f62c0) (0xc000727f40) Create stream\nI0203 21:30:12.843289    2031 log.go:172] (0xc0000f62c0) (0xc000727f40) Stream added, broadcasting: 5\nI0203 21:30:12.843923    2031 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0203 21:30:12.900996    2031 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0203 21:30:12.901027    2031 log.go:172] (0xc000727f40) (5) Data frame handling\nI0203 21:30:12.901049    2031 log.go:172] (0xc000727f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:30:12.925114    2031 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0203 21:30:12.925153    2031 log.go:172] (0xc00053f5e0) (3) Data frame handling\nI0203 21:30:12.925186    2031 log.go:172] (0xc00053f5e0) (3) Data frame sent\nI0203 21:30:12.925439    2031 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0203 21:30:12.925480    2031 log.go:172] (0xc00053f5e0) (3) Data frame handling\nI0203 21:30:12.925656    2031 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0203 21:30:12.925679    2031 log.go:172] (0xc000727f40) (5) Data frame handling\nI0203 21:30:12.927201    2031 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0203 21:30:12.927234    2031 log.go:172] (0xc0006b4820) (1) Data frame handling\nI0203 21:30:12.927281    2031 log.go:172] (0xc0006b4820) (1) Data frame sent\nI0203 21:30:12.927313    2031 log.go:172] (0xc0000f62c0) (0xc0006b4820) Stream removed, broadcasting: 1\nI0203 21:30:12.927333    2031 log.go:172] (0xc0000f62c0) Go away received\nI0203 21:30:12.927713    2031 log.go:172] (0xc0000f62c0) (0xc0006b4820) Stream removed, broadcasting: 1\nI0203 21:30:12.927736    2031 log.go:172] (0xc0000f62c0) (0xc00053f5e0) Stream removed, broadcasting: 3\nI0203 21:30:12.927747    2031 log.go:172] (0xc0000f62c0) (0xc000727f40) Stream removed, broadcasting: 5\n"
Feb  3 21:30:12.934: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:30:12.934: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 21:30:12.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3188 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:30:13.179: INFO: stderr: "I0203 21:30:13.063394    2051 log.go:172] (0xc000a23340) (0xc0009466e0) Create stream\nI0203 21:30:13.063479    2051 log.go:172] (0xc000a23340) (0xc0009466e0) Stream added, broadcasting: 1\nI0203 21:30:13.068419    2051 log.go:172] (0xc000a23340) Reply frame received for 1\nI0203 21:30:13.068458    2051 log.go:172] (0xc000a23340) (0xc0005aa780) Create stream\nI0203 21:30:13.068483    2051 log.go:172] (0xc000a23340) (0xc0005aa780) Stream added, broadcasting: 3\nI0203 21:30:13.069791    2051 log.go:172] (0xc000a23340) Reply frame received for 3\nI0203 21:30:13.069850    2051 log.go:172] (0xc000a23340) (0xc0002a9540) Create stream\nI0203 21:30:13.069874    2051 log.go:172] (0xc000a23340) (0xc0002a9540) Stream added, broadcasting: 5\nI0203 21:30:13.071023    2051 log.go:172] (0xc000a23340) Reply frame received for 5\nI0203 21:30:13.132069    2051 log.go:172] (0xc000a23340) Data frame received for 5\nI0203 21:30:13.132097    2051 log.go:172] (0xc0002a9540) (5) Data frame handling\nI0203 21:30:13.132116    2051 log.go:172] (0xc0002a9540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:30:13.171157    2051 log.go:172] (0xc000a23340) Data frame received for 3\nI0203 21:30:13.171182    2051 log.go:172] (0xc0005aa780) (3) Data frame handling\nI0203 21:30:13.171192    2051 log.go:172] (0xc0005aa780) (3) Data frame sent\nI0203 21:30:13.171301    2051 log.go:172] (0xc000a23340) Data frame received for 3\nI0203 21:30:13.171350    2051 log.go:172] (0xc0005aa780) (3) Data frame handling\nI0203 21:30:13.171456    2051 log.go:172] (0xc000a23340) Data frame received for 5\nI0203 21:30:13.171478    2051 log.go:172] (0xc0002a9540) (5) Data frame handling\nI0203 21:30:13.173096    2051 log.go:172] (0xc000a23340) Data frame received for 1\nI0203 21:30:13.173109    2051 log.go:172] (0xc0009466e0) (1) Data frame handling\nI0203 21:30:13.173116    2051 log.go:172] (0xc0009466e0) (1) Data frame sent\nI0203 21:30:13.173123    2051 log.go:172] (0xc000a23340) (0xc0009466e0) Stream removed, broadcasting: 1\nI0203 21:30:13.173131    2051 log.go:172] (0xc000a23340) Go away received\nI0203 21:30:13.173494    2051 log.go:172] (0xc000a23340) (0xc0009466e0) Stream removed, broadcasting: 1\nI0203 21:30:13.173512    2051 log.go:172] (0xc000a23340) (0xc0005aa780) Stream removed, broadcasting: 3\nI0203 21:30:13.173520    2051 log.go:172] (0xc000a23340) (0xc0002a9540) Stream removed, broadcasting: 5\n"
Feb  3 21:30:13.179: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:30:13.179: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 21:30:13.179: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 21:30:13.186: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  3 21:30:23.202: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 21:30:23.202: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 21:30:23.202: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
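
Moving index.html out of the document root makes the readiness probe's HTTP GET fail, so the kubelet marks each container NotReady without restarting it; the Ready=False / ContainersNotReady conditions below show exactly that. A hand-run check sketch, assuming the same file layout as in the log:

  kubectl --kubeconfig=/root/.kube/config -n statefulset-3188 exec ss-0 -- \
    /bin/sh -c 'ls /usr/local/apache2/htdocs/index.html || echo probe target missing'
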
Feb  3 21:30:23.230: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb  3 21:30:23.231: INFO: ss-0  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  }]
Feb  3 21:30:23.231: INFO: ss-1  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:23.231: INFO: ss-2  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:23.231: INFO: 
Feb  3 21:30:23.231: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 21:30:24.404: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb  3 21:30:24.404: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  }]
Feb  3 21:30:24.404: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:24.404: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:24.404: INFO: 
Feb  3 21:30:24.404: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 21:30:25.409: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb  3 21:30:25.409: INFO: ss-0  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:29:41 +0000 UTC  }]
Feb  3 21:30:25.409: INFO: ss-1  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:25.409: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:25.409: INFO: 
Feb  3 21:30:25.409: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  3 21:30:26.414: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb  3 21:30:26.414: INFO: ss-1  jerma-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:26.414: INFO: ss-2  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 21:30:01 +0000 UTC  }]
Feb  3 21:30:26.414: INFO: 
Feb  3 21:30:26.414: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  3 21:30:27.420: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.794066357s
Feb  3 21:30:28.425: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.787587677s
Feb  3 21:30:29.429: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.783500714s
Feb  3 21:30:30.433: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.779499174s
Feb  3 21:30:31.436: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.775562989s
Feb  3 21:30:32.440: INFO: Verifying statefulset ss doesn't scale past 0 for another 771.642409ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods run in namespace statefulset-3188
Feb  3 21:30:33.444: INFO: Scaling statefulset ss to 0
Feb  3 21:30:33.452: INFO: Waiting for statefulset status.replicas updated to 0
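
The point of this phase is that scale-down does not stall on unhealthy replicas: with burst (i.e. Parallel podManagementPolicy) semantics the controller deletes all pods at once instead of waiting for each to become Ready. The CLI equivalent of what the framework drives here would look roughly like:

  kubectl --kubeconfig=/root/.kube/config -n statefulset-3188 scale statefulset ss --replicas=0
  kubectl --kubeconfig=/root/.kube/config -n statefulset-3188 get statefulset ss -o jsonpath='{.status.replicas}'
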
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 21:30:33.455: INFO: Deleting all statefulset in ns statefulset-3188
Feb  3 21:30:33.457: INFO: Scaling statefulset ss to 0
Feb  3 21:30:33.464: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 21:30:33.466: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:30:33.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3188" for this suite.

• [SLOW TEST:52.169 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":152,"skipped":2600,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:30:33.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  3 21:30:41.637: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:41.654: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 21:30:43.655: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:43.659: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 21:30:45.655: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:45.659: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 21:30:47.655: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:47.658: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 21:30:49.655: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:49.659: INFO: Pod pod-with-prestop-http-hook still exists
Feb  3 21:30:51.655: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  3 21:30:51.658: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
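
A preStop httpGet hook makes the kubelet issue an HTTP GET when the pod is asked to terminate, which is why the deletion above takes several poll cycles: the test deletes the pod, waits for it to disappear, and then verifies that its helper pod actually received the hook request (the "check prestop hook" step). A minimal sketch of a pod carrying such a hook; the image, path, port and host below are illustrative placeholders, not the exact values the test uses:

  # pod.yaml -- apply with: kubectl -n container-lifecycle-hook-7885 apply -f pod.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1        # placeholder image
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop      # hypothetical handler path
            port: 8080
            host: 10.244.2.1             # hypothetical IP of the handler pod
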
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:30:51.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7885" for this suite.

• [SLOW TEST:18.175 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2603,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:30:51.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:30:51.722: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  3 21:30:51.768: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  3 21:30:56.771: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  3 21:30:56.771: INFO: Creating deployment "test-rolling-update-deployment"
Feb  3 21:30:56.774: INFO: Ensuring deployment "test-rolling-update-deployment" gets the revision after the one held by the adopted replica set "test-rolling-update-controller"
Feb  3 21:30:56.804: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  3 21:30:58.832: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb  3 21:30:58.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984656, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984656, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984656, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984656, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
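
This mid-rollout status follows from the default RollingUpdate parameters visible in the Deployment dump below (maxSurge 25%, maxUnavailable 25%): with 1 desired replica, maxSurge rounds up to 1 extra pod and maxUnavailable rounds down to 0, so the controller may run 1 old + 1 new pod (Replicas:2, UpdatedReplicas:1) while keeping at least 1 available (AvailableReplicas:1), and it scales the adopted replica set to 0 only once the new pod is Ready.
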
Feb  3 21:31:00.838: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  3 21:31:00.845: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8876 /apis/apps/v1/namespaces/deployment-8876/deployments/test-rolling-update-deployment 5a99a334-77bc-4b80-8cf2-c0dd501875e6 6390068 1 2021-02-03 21:30:56 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005d42498  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-03 21:30:56 +0000 UTC,LastTransitionTime:2021-02-03 21:30:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2021-02-03 21:31:00 +0000 UTC,LastTransitionTime:2021-02-03 21:30:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb  3 21:31:00.847: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-8876 /apis/apps/v1/namespaces/deployment-8876/replicasets/test-rolling-update-deployment-67cf4f6444 aa4ef043-af32-486e-a35c-db1e1a0c3f77 6390056 1 2021-02-03 21:30:56 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 5a99a334-77bc-4b80-8cf2-c0dd501875e6 0xc005d42937 0xc005d42938}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005d429a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  3 21:31:00.847: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  3 21:31:00.847: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8876 /apis/apps/v1/namespaces/deployment-8876/replicasets/test-rolling-update-controller 8c66b61d-a9cc-43f5-8b53-350839374a21 6390066 2 2021-02-03 21:30:51 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 5a99a334-77bc-4b80-8cf2-c0dd501875e6 0xc005d42867 0xc005d42868}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005d428c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  3 21:31:00.850: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ms8dn" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ms8dn test-rolling-update-deployment-67cf4f6444- deployment-8876 /api/v1/namespaces/deployment-8876/pods/test-rolling-update-deployment-67cf4f6444-ms8dn 8b155e12-d113-47c3-a18d-0ec930f466a7 6390055 0 2021-02-03 21:30:56 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 aa4ef043-af32-486e-a35c-db1e1a0c3f77 0xc005d42de7 0xc005d42de8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hlm9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hlm9g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hlm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:30:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:31:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:31:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:30:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.54,StartTime:2021-02-03 21:30:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 21:30:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c0747ae58b6b2f9c5efe87032487082644bbb65827511e310570ca5abdddd5ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:31:00.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8876" for this suite.

• [SLOW TEST:9.186 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":154,"skipped":2607,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:31:00.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:31:01.285: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ac6a1618-c110-432a-be7c-feb36aaec3a3" in namespace "security-context-test-2787" to be "success or failure"
Feb  3 21:31:01.325: INFO: Pod "busybox-readonly-false-ac6a1618-c110-432a-be7c-feb36aaec3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 40.138794ms
Feb  3 21:31:03.391: INFO: Pod "busybox-readonly-false-ac6a1618-c110-432a-be7c-feb36aaec3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106378466s
Feb  3 21:31:05.395: INFO: Pod "busybox-readonly-false-ac6a1618-c110-432a-be7c-feb36aaec3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109902597s
Feb  3 21:31:05.395: INFO: Pod "busybox-readonly-false-ac6a1618-c110-432a-be7c-feb36aaec3a3" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:31:05.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2787" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2622,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:31:05.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-612
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-612 to expose endpoints map[]
Feb  3 21:31:05.540: INFO: Get endpoints failed (4.516498ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  3 21:31:06.575: INFO: successfully validated that service endpoint-test2 in namespace services-612 exposes endpoints map[] (1.038627408s elapsed)
STEP: Creating pod pod1 in namespace services-612
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-612 to expose endpoints map[pod1:[80]]
Feb  3 21:31:10.710: INFO: successfully validated that service endpoint-test2 in namespace services-612 exposes endpoints map[pod1:[80]] (4.129861472s elapsed)
STEP: Creating pod pod2 in namespace services-612
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-612 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  3 21:31:14.771: INFO: successfully validated that service endpoint-test2 in namespace services-612 exposes endpoints map[pod1:[80] pod2:[80]] (4.056516888s elapsed)
STEP: Deleting pod pod1 in namespace services-612
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-612 to expose endpoints map[pod2:[80]]
Feb  3 21:31:15.824: INFO: successfully validated that service endpoint-test2 in namespace services-612 exposes endpoints map[pod2:[80]] (1.04827149s elapsed)
STEP: Deleting pod pod2 in namespace services-612
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-612 to expose endpoints map[]
Feb  3 21:31:16.856: INFO: successfully validated that service endpoint-test2 in namespace services-612 exposes endpoints map[] (1.026750861s elapsed)
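
Each validation step above is really a check on the endpoints controller: as ready pods matching the service selector are created and deleted, their addresses are added to and removed from the Endpoints object, ending at the empty map once both pods are gone. A manual spot-check sketch with the names from the log:

  kubectl -n services-612 get endpoints endpoint-test2 -o jsonpath='{.subsets[*].addresses[*].ip}'
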
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:31:17.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-612" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.639 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":156,"skipped":2627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:31:17.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-8f519624-3898-4041-97de-ca242c8d6e37 in namespace container-probe-7192
Feb  3 21:31:21.218: INFO: Started pod liveness-8f519624-3898-4041-97de-ca242c8d6e37 in namespace container-probe-7192
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 21:31:21.221: INFO: Initial restart count of pod liveness-8f519624-3898-4041-97de-ca242c8d6e37 is 0
Feb  3 21:31:37.255: INFO: Restart count of pod container-probe-7192/liveness-8f519624-3898-4041-97de-ca242c8d6e37 is now 1 (16.034219762s elapsed)
Feb  3 21:31:57.296: INFO: Restart count of pod container-probe-7192/liveness-8f519624-3898-4041-97de-ca242c8d6e37 is now 2 (36.074940638s elapsed)
Feb  3 21:32:17.336: INFO: Restart count of pod container-probe-7192/liveness-8f519624-3898-4041-97de-ca242c8d6e37 is now 3 (56.11492742s elapsed)
Feb  3 21:32:37.393: INFO: Restart count of pod container-probe-7192/liveness-8f519624-3898-4041-97de-ca242c8d6e37 is now 4 (1m16.172212139s elapsed)
Feb  3 21:33:49.746: INFO: Restart count of pod container-probe-7192/liveness-8f519624-3898-4041-97de-ca242c8d6e37 is now 5 (2m28.524867427s elapsed)
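
The restart count climbs strictly monotonically, which is the property under test. The widening gap before restart 5 (roughly 72s, versus the earlier ~20s intervals) is consistent with the kubelet's exponential crash-loop back-off delaying successive restarts. Reading the counter directly, with the names from the log:

  kubectl -n container-probe-7192 get pod liveness-8f519624-3898-4041-97de-ca242c8d6e37 \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'
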
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:33:49.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7192" for this suite.

• [SLOW TEST:152.726 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2663,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:33:49.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb  3 21:33:49.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8551'
Feb  3 21:33:50.384: INFO: stderr: ""
Feb  3 21:33:50.384: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 21:33:50.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:33:50.684: INFO: stderr: ""
Feb  3 21:33:50.684: INFO: stdout: "update-demo-nautilus-9dgqr update-demo-nautilus-bsnkn "
Feb  3 21:33:50.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:33:50.786: INFO: stderr: ""
Feb  3 21:33:50.786: INFO: stdout: ""
Feb  3 21:33:50.786: INFO: update-demo-nautilus-9dgqr is created but not running
Feb  3 21:33:55.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:33:55.882: INFO: stderr: ""
Feb  3 21:33:55.882: INFO: stdout: "update-demo-nautilus-9dgqr update-demo-nautilus-bsnkn "
Feb  3 21:33:55.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:33:55.975: INFO: stderr: ""
Feb  3 21:33:55.975: INFO: stdout: "true"
Feb  3 21:33:55.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:33:56.068: INFO: stderr: ""
Feb  3 21:33:56.068: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:33:56.068: INFO: validating pod update-demo-nautilus-9dgqr
Feb  3 21:33:56.072: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 21:33:56.072: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 21:33:56.072: INFO: update-demo-nautilus-9dgqr is verified up and running
Feb  3 21:33:56.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bsnkn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:33:56.158: INFO: stderr: ""
Feb  3 21:33:56.158: INFO: stdout: "true"
Feb  3 21:33:56.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bsnkn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:33:56.270: INFO: stderr: ""
Feb  3 21:33:56.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:33:56.270: INFO: validating pod update-demo-nautilus-bsnkn
Feb  3 21:33:56.274: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 21:33:56.274: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 21:33:56.274: INFO: update-demo-nautilus-bsnkn is verified up and running
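
The repeated get-pods calls above rely on kubectl's go-template output: the first template prints "true" only when a containerStatuses entry named update-demo reports a running state, and the second extracts that container's image; together they back the "verified up and running" conclusion. An illustrative jsonpath equivalent of the running check:

  kubectl -n kubectl-8551 get pod update-demo-nautilus-9dgqr \
    -o jsonpath='{.status.containerStatuses[?(@.name=="update-demo")].state.running.startedAt}'
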
STEP: scaling down the replication controller
Feb  3 21:33:56.277: INFO: scanned /root for discovery docs: 
Feb  3 21:33:56.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8551'
Feb  3 21:33:57.439: INFO: stderr: ""
Feb  3 21:33:57.439: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 21:33:57.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:33:57.591: INFO: stderr: ""
Feb  3 21:33:57.591: INFO: stdout: "update-demo-nautilus-9dgqr update-demo-nautilus-bsnkn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  3 21:34:02.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:34:02.679: INFO: stderr: ""
Feb  3 21:34:02.679: INFO: stdout: "update-demo-nautilus-9dgqr "
Feb  3 21:34:02.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:02.771: INFO: stderr: ""
Feb  3 21:34:02.771: INFO: stdout: "true"
Feb  3 21:34:02.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:02.861: INFO: stderr: ""
Feb  3 21:34:02.861: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:34:02.861: INFO: validating pod update-demo-nautilus-9dgqr
Feb  3 21:34:02.864: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 21:34:02.864: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 21:34:02.864: INFO: update-demo-nautilus-9dgqr is verified up and running
STEP: scaling up the replication controller
Feb  3 21:34:02.867: INFO: scanned /root for discovery docs: 
Feb  3 21:34:02.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8551'
Feb  3 21:34:03.989: INFO: stderr: ""
Feb  3 21:34:03.989: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 21:34:03.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:34:04.077: INFO: stderr: ""
Feb  3 21:34:04.077: INFO: stdout: "update-demo-nautilus-8r25s update-demo-nautilus-9dgqr "
Feb  3 21:34:04.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8r25s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:04.166: INFO: stderr: ""
Feb  3 21:34:04.166: INFO: stdout: ""
Feb  3 21:34:04.166: INFO: update-demo-nautilus-8r25s is created but not running
Feb  3 21:34:09.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8551'
Feb  3 21:34:09.278: INFO: stderr: ""
Feb  3 21:34:09.278: INFO: stdout: "update-demo-nautilus-8r25s update-demo-nautilus-9dgqr "
Feb  3 21:34:09.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8r25s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:09.368: INFO: stderr: ""
Feb  3 21:34:09.368: INFO: stdout: "true"
Feb  3 21:34:09.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8r25s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:09.478: INFO: stderr: ""
Feb  3 21:34:09.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:34:09.478: INFO: validating pod update-demo-nautilus-8r25s
Feb  3 21:34:09.483: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 21:34:09.483: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 21:34:09.483: INFO: update-demo-nautilus-8r25s is verified up and running
Feb  3 21:34:09.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:09.592: INFO: stderr: ""
Feb  3 21:34:09.592: INFO: stdout: "true"
Feb  3 21:34:09.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dgqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8551'
Feb  3 21:34:09.684: INFO: stderr: ""
Feb  3 21:34:09.684: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 21:34:09.684: INFO: validating pod update-demo-nautilus-9dgqr
Feb  3 21:34:09.688: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 21:34:09.688: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 21:34:09.688: INFO: update-demo-nautilus-9dgqr is verified up and running
STEP: using delete to clean up resources
Feb  3 21:34:09.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8551'
Feb  3 21:34:09.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:34:09.796: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  3 21:34:09.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8551'
Feb  3 21:34:09.906: INFO: stderr: "No resources found in kubectl-8551 namespace.\n"
Feb  3 21:34:09.906: INFO: stdout: ""
Feb  3 21:34:09.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8551 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 21:34:10.006: INFO: stderr: ""
Feb  3 21:34:10.006: INFO: stdout: "update-demo-nautilus-8r25s\nupdate-demo-nautilus-9dgqr\n"
Feb  3 21:34:10.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8551'
Feb  3 21:34:10.617: INFO: stderr: "No resources found in kubectl-8551 namespace.\n"
Feb  3 21:34:10.617: INFO: stdout: ""
Feb  3 21:34:10.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8551 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 21:34:10.727: INFO: stderr: ""
Feb  3 21:34:10.727: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:10.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8551" for this suite.

• [SLOW TEST:20.965 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":158,"skipped":2672,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:10.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:34:11.263: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:34:13.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984851, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984851, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984851, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984851, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:34:16.449: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:16.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2649" for this suite.
STEP: Destroying namespace "webhook-2649-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.782 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":159,"skipped":2682,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:16.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3d9d681b-6035-4ca3-af58-17ed35bc0cbe
STEP: Creating a pod to test consume secrets
Feb  3 21:34:16.632: INFO: Waiting up to 5m0s for pod "pod-secrets-ad21701e-8e01-4351-b869-c856516ed646" in namespace "secrets-5124" to be "success or failure"
Feb  3 21:34:16.636: INFO: Pod "pod-secrets-ad21701e-8e01-4351-b869-c856516ed646": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61873ms
Feb  3 21:34:18.639: INFO: Pod "pod-secrets-ad21701e-8e01-4351-b869-c856516ed646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007326731s
Feb  3 21:34:20.643: INFO: Pod "pod-secrets-ad21701e-8e01-4351-b869-c856516ed646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011343213s
STEP: Saw pod success
Feb  3 21:34:20.644: INFO: Pod "pod-secrets-ad21701e-8e01-4351-b869-c856516ed646" satisfied condition "success or failure"
Feb  3 21:34:20.646: INFO: Trying to get logs from node jerma-worker pod pod-secrets-ad21701e-8e01-4351-b869-c856516ed646 container secret-env-test: 
STEP: delete the pod
Feb  3 21:34:20.752: INFO: Waiting for pod pod-secrets-ad21701e-8e01-4351-b869-c856516ed646 to disappear
Feb  3 21:34:20.844: INFO: Pod pod-secrets-ad21701e-8e01-4351-b869-c856516ed646 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5124" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2684,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:20.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7672.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7672.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:34:27.069: INFO: DNS probes using dns-7672/dns-test-a16522ce-ca3b-425b-a25a-08f0c7d7607d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:27.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7672" for this suite.

• [SLOW TEST:6.285 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":161,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:27.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:34:27.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec" in namespace "projected-7234" to be "success or failure"
Feb  3 21:34:27.623: INFO: Pod "downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec": Phase="Pending", Reason="", readiness=false. Elapsed: 47.879963ms
Feb  3 21:34:29.797: INFO: Pod "downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221570391s
Feb  3 21:34:31.800: INFO: Pod "downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225251241s
STEP: Saw pod success
Feb  3 21:34:31.800: INFO: Pod "downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec" satisfied condition "success or failure"
Feb  3 21:34:31.804: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec container client-container: 
STEP: delete the pod
Feb  3 21:34:31.841: INFO: Waiting for pod downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec to disappear
Feb  3 21:34:31.845: INFO: Pod downwardapi-volume-e426ecc2-ee49-4457-94a4-54745afe08ec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:31.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7234" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:31.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Feb  3 21:34:32.051: INFO: Waiting up to 5m0s for pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d" in namespace "containers-8622" to be "success or failure"
Feb  3 21:34:32.063: INFO: Pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.95177ms
Feb  3 21:34:34.066: INFO: Pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015567426s
Feb  3 21:34:36.126: INFO: Pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d": Phase="Running", Reason="", readiness=true. Elapsed: 4.075418084s
Feb  3 21:34:38.130: INFO: Pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079199806s
STEP: Saw pod success
Feb  3 21:34:38.130: INFO: Pod "client-containers-c9c1442f-3506-48ba-a728-523799a50c4d" satisfied condition "success or failure"
Feb  3 21:34:38.132: INFO: Trying to get logs from node jerma-worker2 pod client-containers-c9c1442f-3506-48ba-a728-523799a50c4d container test-container: 
STEP: delete the pod
Feb  3 21:34:38.194: INFO: Waiting for pod client-containers-c9c1442f-3506-48ba-a728-523799a50c4d to disappear
Feb  3 21:34:38.270: INFO: Pod client-containers-c9c1442f-3506-48ba-a728-523799a50c4d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8622" for this suite.

• [SLOW TEST:6.473 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2767,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:38.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-6052a81a-ed75-4c4e-93bf-9eca30d7482d
STEP: Creating configMap with name cm-test-opt-upd-89f7fabd-dce4-4361-b9a3-519fc90756ff
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6052a81a-ed75-4c4e-93bf-9eca30d7482d
STEP: Updating configmap cm-test-opt-upd-89f7fabd-dce4-4361-b9a3-519fc90756ff
STEP: Creating configMap with name cm-test-opt-create-8aab161f-cd7b-4902-82da-9b6e3bbaccb8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:34:46.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7793" for this suite.

• [SLOW TEST:8.366 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2809,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:34:46.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 21:34:46.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6517'
Feb  3 21:34:46.848: INFO: stderr: ""
Feb  3 21:34:46.848: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb  3 21:34:51.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6517 -o json'
Feb  3 21:34:51.992: INFO: stderr: ""
Feb  3 21:34:51.992: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2021-02-03T21:34:46Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-6517\",\n        \"resourceVersion\": \"6391183\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6517/pods/e2e-test-httpd-pod\",\n        \"uid\": \"09748727-9dbb-4c48-8ec4-625fde6b5228\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-7sxwk\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-7sxwk\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-7sxwk\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-03T21:34:46Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-03T21:34:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-03T21:34:50Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-03T21:34:46Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://fbff42494cb554399e2f493739e92ba72fddbbee8e5947156d6349ae85ef9540\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2021-02-03T21:34:49Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.5\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.54\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.54\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-02-03T21:34:46Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  3 21:34:51.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6517'
Feb  3 21:34:52.441: INFO: stderr: ""
Feb  3 21:34:52.441: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Feb  3 21:34:52.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6517'
Feb  3 21:35:02.094: INFO: stderr: ""
Feb  3 21:35:02.094: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:02.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6517" for this suite.

• [SLOW TEST:15.406 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":165,"skipped":2829,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:02.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:35:02.197: INFO: Create a RollingUpdate DaemonSet
Feb  3 21:35:02.201: INFO: Check that daemon pods launch on every node of the cluster
Feb  3 21:35:02.207: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:02.210: INFO: Number of nodes with available pods: 0
Feb  3 21:35:02.210: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:35:03.216: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:03.219: INFO: Number of nodes with available pods: 0
Feb  3 21:35:03.219: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:35:04.216: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:04.219: INFO: Number of nodes with available pods: 0
Feb  3 21:35:04.219: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:35:05.215: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:05.218: INFO: Number of nodes with available pods: 0
Feb  3 21:35:05.218: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:35:06.215: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:06.218: INFO: Number of nodes with available pods: 1
Feb  3 21:35:06.218: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:35:07.221: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:07.242: INFO: Number of nodes with available pods: 2
Feb  3 21:35:07.243: INFO: Number of running nodes: 2, number of available pods: 2
Feb  3 21:35:07.243: INFO: Update the DaemonSet to trigger a rollout
Feb  3 21:35:07.304: INFO: Updating DaemonSet daemon-set
Feb  3 21:35:22.338: INFO: Roll back the DaemonSet before rollout is complete
Feb  3 21:35:22.345: INFO: Updating DaemonSet daemon-set
Feb  3 21:35:22.345: INFO: Make sure DaemonSet rollback is complete
Feb  3 21:35:22.355: INFO: Wrong image for pod: daemon-set-mmhdn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 21:35:22.355: INFO: Pod daemon-set-mmhdn is not available
Feb  3 21:35:22.390: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:23.395: INFO: Wrong image for pod: daemon-set-mmhdn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 21:35:23.395: INFO: Pod daemon-set-mmhdn is not available
Feb  3 21:35:23.399: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:24.708: INFO: Wrong image for pod: daemon-set-mmhdn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb  3 21:35:24.708: INFO: Pod daemon-set-mmhdn is not available
Feb  3 21:35:24.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:35:25.405: INFO: Pod daemon-set-pczrw is not available
Feb  3 21:35:25.410: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7003, will wait for the garbage collector to delete the pods
Feb  3 21:35:25.475: INFO: Deleting DaemonSet.extensions daemon-set took: 6.138473ms
Feb  3 21:35:25.875: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.353954ms
Feb  3 21:35:32.178: INFO: Number of nodes with available pods: 0
Feb  3 21:35:32.178: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 21:35:32.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7003/daemonsets","resourceVersion":"6391427"},"items":null}

Feb  3 21:35:32.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7003/pods","resourceVersion":"6391427"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:32.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7003" for this suite.

• [SLOW TEST:30.103 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":166,"skipped":2830,"failed":0}
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:32.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-9010
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9010
STEP: Deleting pre-stop pod
Feb  3 21:35:45.318: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:45.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9010" for this suite.

• [SLOW TEST:13.182 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":167,"skipped":2831,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:45.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:35:46.038: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:35:48.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984946, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984946, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984946, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747984946, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:35:51.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:51.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4088" for this suite.
STEP: Destroying namespace "webhook-4088-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.996 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":168,"skipped":2854,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:51.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:35:51.474: INFO: Waiting up to 5m0s for pod "busybox-user-65534-05860832-9e09-4d20-aded-9ef027f64f25" in namespace "security-context-test-1412" to be "success or failure"
Feb  3 21:35:51.490: INFO: Pod "busybox-user-65534-05860832-9e09-4d20-aded-9ef027f64f25": Phase="Pending", Reason="", readiness=false. Elapsed: 16.80799ms
Feb  3 21:35:53.494: INFO: Pod "busybox-user-65534-05860832-9e09-4d20-aded-9ef027f64f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020422553s
Feb  3 21:35:55.498: INFO: Pod "busybox-user-65534-05860832-9e09-4d20-aded-9ef027f64f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024166335s
Feb  3 21:35:55.498: INFO: Pod "busybox-user-65534-05860832-9e09-4d20-aded-9ef027f64f25" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:55.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1412" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2857,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:55.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-b3193c90-5695-4fab-956c-15bdb61a98ba
STEP: Creating a pod to test consume configMaps
Feb  3 21:35:55.605: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850" in namespace "projected-9802" to be "success or failure"
Feb  3 21:35:55.624: INFO: Pod "pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850": Phase="Pending", Reason="", readiness=false. Elapsed: 18.600538ms
Feb  3 21:35:57.628: INFO: Pod "pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02277939s
Feb  3 21:35:59.633: INFO: Pod "pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027181778s
STEP: Saw pod success
Feb  3 21:35:59.633: INFO: Pod "pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850" satisfied condition "success or failure"
Feb  3 21:35:59.636: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 21:35:59.736: INFO: Waiting for pod pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850 to disappear
Feb  3 21:35:59.744: INFO: Pod pod-projected-configmaps-344f4ee3-47d3-4a54-a28f-2b0408657850 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:35:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9802" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2869,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:35:59.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-741 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-741;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-741 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-741;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-741.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-741.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-741.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-741.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-741.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-741.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-741.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-741.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-741.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.246_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-741 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-741;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-741 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-741;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-741.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-741.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-741.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-741.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-741.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-741.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-741.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-741.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-741.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-741.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.246_tcp@PTR;sleep 1; done
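
Note: partially-qualified names like dns-test-service.dns-741.svc resolve only through the pod's resolv.conf search list, which is why every probe above passes +search to dig. One probe in isolation:

dig +notcp +noall +answer +search dns-test-service.dns-741.svc A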

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:36:07.900: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.903: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.905: INFO: Unable to read wheezy_udp@dns-test-service.dns-741 from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.907: INFO: Unable to read wheezy_tcp@dns-test-service.dns-741 from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.909: INFO: Unable to read wheezy_udp@dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.914: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.917: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.936: INFO: Unable to read jessie_udp@dns-test-service from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.939: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.941: INFO: Unable to read jessie_udp@dns-test-service.dns-741 from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-741 from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.946: INFO: Unable to read jessie_udp@dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.951: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-741.svc from pod dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61: the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)
Feb  3 21:36:07.967: INFO: Lookups using dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-741 wheezy_tcp@dns-test-service.dns-741 wheezy_udp@dns-test-service.dns-741.svc wheezy_tcp@dns-test-service.dns-741.svc wheezy_udp@_http._tcp.dns-test-service.dns-741.svc wheezy_tcp@_http._tcp.dns-test-service.dns-741.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-741 jessie_tcp@dns-test-service.dns-741 jessie_udp@dns-test-service.dns-741.svc jessie_tcp@dns-test-service.dns-741.svc jessie_udp@_http._tcp.dns-test-service.dns-741.svc jessie_tcp@_http._tcp.dns-test-service.dns-741.svc]

Feb  3 21:36:12.972 - 21:36:33.077: INFO: Lookups using dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61 failed for the same 16 records on five further probe rounds (at 21:36:12, 21:36:17, 21:36:22, 21:36:27 and 21:36:32), each round logging the identical "the server could not find the requested resource (get pods dns-test-14b66039-095b-4692-ab63-62f7116e7a61)" error for every wheezy_* and jessie_* name listed above.

Feb  3 21:36:38.147: INFO: DNS probes using dns-741/dns-test-14b66039-095b-4692-ab63-62f7116e7a61 succeeded
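
Note: the NotFound errors above are expected while the probe pod is starting: the framework fetches each /results marker through the pod's HTTP interface and retries roughly every five seconds until all 16 markers exist, which they finally do at 21:36:38. One way to inspect a marker by hand (the querier container name here is illustrative; the probe pod runs separate wheezy and jessie querier containers alongside a webserver):

  kubectl exec dns-test-14b66039-095b-4692-ab63-62f7116e7a61 -n dns-741 \
    -c querier -- cat /results/wheezy_udp@dns-test-service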

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:36:38.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-741" for this suite.

• [SLOW TEST:39.076 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":171,"skipped":2876,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:36:38.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:36:38.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9" in namespace "projected-6370" to be "success or failure"
Feb  3 21:36:38.900: INFO: Pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.651011ms
Feb  3 21:36:40.907: INFO: Pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034925607s
Feb  3 21:36:42.978: INFO: Pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.105309432s
Feb  3 21:36:44.982: INFO: Pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109587065s
STEP: Saw pod success
Feb  3 21:36:44.982: INFO: Pod "downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9" satisfied condition "success or failure"
Feb  3 21:36:44.985: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9 container client-container: 
STEP: delete the pod
Feb  3 21:36:45.037: INFO: Waiting for pod downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9 to disappear
Feb  3 21:36:45.058: INFO: Pod downwardapi-volume-7dcbac5a-2982-4865-b707-48c9be4f61c9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:36:45.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6370" for this suite.

• [SLOW TEST:6.237 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2879,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:36:45.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:36:49.317: INFO: Waiting up to 5m0s for pod "client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b" in namespace "pods-9567" to be "success or failure"
Feb  3 21:36:49.323: INFO: Pod "client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.814191ms
Feb  3 21:36:51.428: INFO: Pod "client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11057616s
Feb  3 21:36:53.432: INFO: Pod "client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114731129s
STEP: Saw pod success
Feb  3 21:36:53.432: INFO: Pod "client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b" satisfied condition "success or failure"
Feb  3 21:36:53.435: INFO: Trying to get logs from node jerma-worker pod client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b container env3cont: 
STEP: delete the pod
Feb  3 21:36:53.495: INFO: Waiting for pod client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b to disappear
Feb  3 21:36:53.509: INFO: Pod client-envvars-b290d4bf-d445-4f85-bfa6-473cdfba0b7b no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:36:53.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9567" for this suite.

• [SLOW TEST:8.454 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2882,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:36:53.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:36:57.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9322" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2883,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:36:57.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7415
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 21:36:57.741: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 21:37:29.845: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7415 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:37:29.845: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:37:29.875787       6 log.go:172] (0xc002a4e2c0) (0xc001c21180) Create stream
I0203 21:37:29.875816       6 log.go:172] (0xc002a4e2c0) (0xc001c21180) Stream added, broadcasting: 1
I0203 21:37:29.877592       6 log.go:172] (0xc002a4e2c0) Reply frame received for 1
I0203 21:37:29.877623       6 log.go:172] (0xc002a4e2c0) (0xc001c21220) Create stream
I0203 21:37:29.877634       6 log.go:172] (0xc002a4e2c0) (0xc001c21220) Stream added, broadcasting: 3
I0203 21:37:29.878332       6 log.go:172] (0xc002a4e2c0) Reply frame received for 3
I0203 21:37:29.878360       6 log.go:172] (0xc002a4e2c0) (0xc00138c000) Create stream
I0203 21:37:29.878373       6 log.go:172] (0xc002a4e2c0) (0xc00138c000) Stream added, broadcasting: 5
I0203 21:37:29.879158       6 log.go:172] (0xc002a4e2c0) Reply frame received for 5
I0203 21:37:30.974214       6 log.go:172] (0xc002a4e2c0) Data frame received for 3
I0203 21:37:30.974296       6 log.go:172] (0xc001c21220) (3) Data frame handling
I0203 21:37:30.974331       6 log.go:172] (0xc001c21220) (3) Data frame sent
I0203 21:37:30.974373       6 log.go:172] (0xc002a4e2c0) Data frame received for 3
I0203 21:37:30.974416       6 log.go:172] (0xc001c21220) (3) Data frame handling
I0203 21:37:30.974457       6 log.go:172] (0xc002a4e2c0) Data frame received for 5
I0203 21:37:30.974487       6 log.go:172] (0xc00138c000) (5) Data frame handling
I0203 21:37:30.976920       6 log.go:172] (0xc002a4e2c0) Data frame received for 1
I0203 21:37:30.976964       6 log.go:172] (0xc001c21180) (1) Data frame handling
I0203 21:37:30.977009       6 log.go:172] (0xc001c21180) (1) Data frame sent
I0203 21:37:30.977032       6 log.go:172] (0xc002a4e2c0) (0xc001c21180) Stream removed, broadcasting: 1
I0203 21:37:30.977151       6 log.go:172] (0xc002a4e2c0) (0xc001c21180) Stream removed, broadcasting: 1
I0203 21:37:30.977167       6 log.go:172] (0xc002a4e2c0) (0xc001c21220) Stream removed, broadcasting: 3
I0203 21:37:30.977274       6 log.go:172] (0xc002a4e2c0) Go away received
I0203 21:37:30.977446       6 log.go:172] (0xc002a4e2c0) (0xc00138c000) Stream removed, broadcasting: 5
Feb  3 21:37:30.977: INFO: Found all expected endpoints: [netserver-0]
Feb  3 21:37:30.981: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7415 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:37:30.981: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:37:31.011217       6 log.go:172] (0xc002dda630) (0xc0024fe460) Create stream
I0203 21:37:31.011254       6 log.go:172] (0xc002dda630) (0xc0024fe460) Stream added, broadcasting: 1
I0203 21:37:31.013438       6 log.go:172] (0xc002dda630) Reply frame received for 1
I0203 21:37:31.013483       6 log.go:172] (0xc002dda630) (0xc0024fe5a0) Create stream
I0203 21:37:31.013502       6 log.go:172] (0xc002dda630) (0xc0024fe5a0) Stream added, broadcasting: 3
I0203 21:37:31.014873       6 log.go:172] (0xc002dda630) Reply frame received for 3
I0203 21:37:31.014933       6 log.go:172] (0xc002dda630) (0xc001c21400) Create stream
I0203 21:37:31.014955       6 log.go:172] (0xc002dda630) (0xc001c21400) Stream added, broadcasting: 5
I0203 21:37:31.016125       6 log.go:172] (0xc002dda630) Reply frame received for 5
I0203 21:37:32.118140       6 log.go:172] (0xc002dda630) Data frame received for 3
I0203 21:37:32.118186       6 log.go:172] (0xc0024fe5a0) (3) Data frame handling
I0203 21:37:32.118202       6 log.go:172] (0xc0024fe5a0) (3) Data frame sent
I0203 21:37:32.118239       6 log.go:172] (0xc002dda630) Data frame received for 3
I0203 21:37:32.118248       6 log.go:172] (0xc0024fe5a0) (3) Data frame handling
I0203 21:37:32.118281       6 log.go:172] (0xc002dda630) Data frame received for 5
I0203 21:37:32.118316       6 log.go:172] (0xc001c21400) (5) Data frame handling
I0203 21:37:32.120125       6 log.go:172] (0xc002dda630) Data frame received for 1
I0203 21:37:32.120168       6 log.go:172] (0xc0024fe460) (1) Data frame handling
I0203 21:37:32.120191       6 log.go:172] (0xc0024fe460) (1) Data frame sent
I0203 21:37:32.120209       6 log.go:172] (0xc002dda630) (0xc0024fe460) Stream removed, broadcasting: 1
I0203 21:37:32.120238       6 log.go:172] (0xc002dda630) Go away received
I0203 21:37:32.120411       6 log.go:172] (0xc002dda630) (0xc0024fe460) Stream removed, broadcasting: 1
I0203 21:37:32.120444       6 log.go:172] (0xc002dda630) (0xc0024fe5a0) Stream removed, broadcasting: 3
I0203 21:37:32.120464       6 log.go:172] (0xc002dda630) (0xc001c21400) Stream removed, broadcasting: 5
Feb  3 21:37:32.120: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:37:32.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7415" for this suite.

• [SLOW TEST:34.468 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2908,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:37:32.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:37:32.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8" in namespace "downward-api-8732" to be "success or failure"
Feb  3 21:37:32.266: INFO: Pod "downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.859649ms
Feb  3 21:37:34.270: INFO: Pod "downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025020115s
Feb  3 21:37:36.302: INFO: Pod "downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057278649s
STEP: Saw pod success
Feb  3 21:37:36.302: INFO: Pod "downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8" satisfied condition "success or failure"
Feb  3 21:37:36.304: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8 container client-container: 
STEP: delete the pod
Feb  3 21:37:36.342: INFO: Waiting for pod downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8 to disappear
Feb  3 21:37:36.352: INFO: Pod downwardapi-volume-140566fb-901b-40cb-9266-36aac160e8a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:37:36.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8732" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:37:36.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-ca4ea2f1-b650-4101-9099-6f8a78e28452
STEP: Creating a pod to test consume secrets
Feb  3 21:37:36.485: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae" in namespace "projected-5492" to be "success or failure"
Feb  3 21:37:36.527: INFO: Pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae": Phase="Pending", Reason="", readiness=false. Elapsed: 41.523654ms
Feb  3 21:37:38.657: INFO: Pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17220707s
Feb  3 21:37:40.660: INFO: Pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174874508s
Feb  3 21:37:42.663: INFO: Pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178180949s
STEP: Saw pod success
Feb  3 21:37:42.663: INFO: Pod "pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae" satisfied condition "success or failure"
Feb  3 21:37:42.666: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae container secret-volume-test: 
STEP: delete the pod
Feb  3 21:37:42.681: INFO: Waiting for pod pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae to disappear
Feb  3 21:37:42.684: INFO: Pod pod-projected-secrets-aa11dd8d-4ca4-48e5-82fe-eb146bb454ae no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:37:42.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5492" for this suite.

• [SLOW TEST:6.345 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2961,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:37:42.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 21:37:42.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1878'
Feb  3 21:37:45.474: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 21:37:45.474: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Feb  3 21:37:47.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1878'
Feb  3 21:37:47.746: INFO: stderr: ""
Feb  3 21:37:47.746: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:37:47.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1878" for this suite.

• [SLOW TEST:5.047 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484
    should create an rc or deployment from an image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":178,"skipped":2978,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:37:47.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:37:48.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb  3 21:37:50.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6165 create -f -'
Feb  3 21:37:53.834: INFO: stderr: ""
Feb  3 21:37:53.834: INFO: stdout: "e2e-test-crd-publish-openapi-7560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb  3 21:37:53.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6165 delete e2e-test-crd-publish-openapi-7560-crds test-cr'
Feb  3 21:37:53.930: INFO: stderr: ""
Feb  3 21:37:53.930: INFO: stdout: "e2e-test-crd-publish-openapi-7560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb  3 21:37:53.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6165 apply -f -'
Feb  3 21:37:54.191: INFO: stderr: ""
Feb  3 21:37:54.191: INFO: stdout: "e2e-test-crd-publish-openapi-7560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb  3 21:37:54.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6165 delete e2e-test-crd-publish-openapi-7560-crds test-cr'
Feb  3 21:37:54.285: INFO: stderr: ""
Feb  3 21:37:54.285: INFO: stdout: "e2e-test-crd-publish-openapi-7560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb  3 21:37:54.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7560-crds'
Feb  3 21:37:54.527: INFO: stderr: ""
Feb  3 21:37:54.527: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7560-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:37:57.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6165" for this suite.

• [SLOW TEST:9.697 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":179,"skipped":2986,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:37:57.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:38:10.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8001" for this suite.

• [SLOW TEST:13.293 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":180,"skipped":3002,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:38:10.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  3 21:38:10.811: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392407 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 21:38:10.811: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392407 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  3 21:38:20.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392446 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  3 21:38:20.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392446 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  3 21:38:30.825: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392477 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 21:38:30.825: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392477 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  3 21:38:40.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392507 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 21:38:40.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-a c25fc603-46d1-4e97-a4a6-00bc6ef5a7dd 6392507 0 2021-02-03 21:38:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  3 21:38:50.839: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-b 0bc26289-9fb0-40a1-9f79-a10b1c24f7e2 6392539 0 2021-02-03 21:38:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 21:38:50.839: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-b 0bc26289-9fb0-40a1-9f79-a10b1c24f7e2 6392539 0 2021-02-03 21:38:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  3 21:39:00.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-b 0bc26289-9fb0-40a1-9f79-a10b1c24f7e2 6392573 0 2021-02-03 21:38:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 21:39:00.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1113 /api/v1/namespaces/watch-1113/configmaps/e2e-watch-test-configmap-b 0bc26289-9fb0-40a1-9f79-a10b1c24f7e2 6392573 0 2021-02-03 21:38:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:10.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1113" for this suite.

• [SLOW TEST:60.111 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":181,"skipped":3005,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:10.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  3 21:39:15.121: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:15.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8461" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3023,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:15.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-7a3ff034-4cc2-4875-91b3-b7078cb9a0cd
STEP: Creating secret with name s-test-opt-upd-103f14dd-4bff-402a-ae76-d12fea5fc10c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7a3ff034-4cc2-4875-91b3-b7078cb9a0cd
STEP: Updating secret s-test-opt-upd-103f14dd-4bff-402a-ae76-d12fea5fc10c
STEP: Creating secret with name s-test-opt-create-e39574e2-76aa-4c58-9b32-2b63e23e3f49
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:25.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2336" for this suite.

• [SLOW TEST:10.505 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3026,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:25.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:25.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9418" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":184,"skipped":3064,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:25.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  3 21:39:25.949: INFO: Waiting up to 5m0s for pod "pod-9d25f77d-470f-47e4-987a-e49e9ab0310e" in namespace "emptydir-1402" to be "success or failure"
Feb  3 21:39:25.953: INFO: Pod "pod-9d25f77d-470f-47e4-987a-e49e9ab0310e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.502141ms
Feb  3 21:39:27.956: INFO: Pod "pod-9d25f77d-470f-47e4-987a-e49e9ab0310e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006984198s
Feb  3 21:39:29.960: INFO: Pod "pod-9d25f77d-470f-47e4-987a-e49e9ab0310e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010958416s
STEP: Saw pod success
Feb  3 21:39:29.960: INFO: Pod "pod-9d25f77d-470f-47e4-987a-e49e9ab0310e" satisfied condition "success or failure"
Feb  3 21:39:29.963: INFO: Trying to get logs from node jerma-worker pod pod-9d25f77d-470f-47e4-987a-e49e9ab0310e container test-container: 
STEP: delete the pod
Feb  3 21:39:29.995: INFO: Waiting for pod pod-9d25f77d-470f-47e4-987a-e49e9ab0310e to disappear
Feb  3 21:39:30.013: INFO: Pod pod-9d25f77d-470f-47e4-987a-e49e9ab0310e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:30.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1402" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3068,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:30.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-faa96cca-f827-41f2-bcb8-2151318cb2af
STEP: Creating a pod to test consume secrets
Feb  3 21:39:30.111: INFO: Waiting up to 5m0s for pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680" in namespace "secrets-6054" to be "success or failure"
Feb  3 21:39:30.134: INFO: Pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680": Phase="Pending", Reason="", readiness=false. Elapsed: 22.55989ms
Feb  3 21:39:32.347: INFO: Pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235644192s
Feb  3 21:39:34.539: INFO: Pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680": Phase="Running", Reason="", readiness=true. Elapsed: 4.427339923s
Feb  3 21:39:36.542: INFO: Pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.430795446s
STEP: Saw pod success
Feb  3 21:39:36.542: INFO: Pod "pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680" satisfied condition "success or failure"
Feb  3 21:39:36.545: INFO: Trying to get logs from node jerma-worker pod pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680 container secret-volume-test: 
STEP: delete the pod
Feb  3 21:39:36.574: INFO: Waiting for pod pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680 to disappear
Feb  3 21:39:36.579: INFO: Pod pod-secrets-8ed709e4-b1dc-4391-8997-0935c494d680 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6054" for this suite.

• [SLOW TEST:6.565 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:36.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-d3baac28-30e8-4b67-adc2-d8afca263742
STEP: Creating a pod to test consume configMaps
Feb  3 21:39:36.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d" in namespace "projected-5942" to be "success or failure"
Feb  3 21:39:36.703: INFO: Pod "pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.822321ms
Feb  3 21:39:38.708: INFO: Pod "pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030222228s
Feb  3 21:39:40.711: INFO: Pod "pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03395934s
STEP: Saw pod success
Feb  3 21:39:40.711: INFO: Pod "pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d" satisfied condition "success or failure"
Feb  3 21:39:40.714: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 21:39:40.748: INFO: Waiting for pod pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d to disappear
Feb  3 21:39:40.766: INFO: Pod pod-projected-configmaps-76a51502-6d0a-4ce2-8c94-13a8e395f86d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:40.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5942" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3098,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:40.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:39:40.880: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:46.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1729" for this suite.

• [SLOW TEST:5.576 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":188,"skipped":3183,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:46.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb  3 21:39:46.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5180'
Feb  3 21:39:46.833: INFO: stderr: ""
Feb  3 21:39:46.833: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  3 21:39:47.837: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:47.837: INFO: Found 0 / 1
Feb  3 21:39:48.837: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:48.837: INFO: Found 0 / 1
Feb  3 21:39:49.838: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:49.838: INFO: Found 0 / 1
Feb  3 21:39:50.838: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:50.838: INFO: Found 1 / 1
Feb  3 21:39:50.838: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  3 21:39:50.841: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:50.841: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 21:39:50.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-mqvpm --namespace=kubectl-5180 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  3 21:39:50.951: INFO: stderr: ""
Feb  3 21:39:50.951: INFO: stdout: "pod/agnhost-master-mqvpm patched\n"
STEP: checking annotations
Feb  3 21:39:50.954: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 21:39:50.954: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5180" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":189,"skipped":3197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:50.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-6ea47c95-b29f-4a0d-989f-d0495e58dba5
STEP: Creating secret with name s-test-opt-upd-0880d406-9ee3-4cb7-85b0-1d8cbebd961b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6ea47c95-b29f-4a0d-989f-d0495e58dba5
STEP: Updating secret s-test-opt-upd-0880d406-9ee3-4cb7-85b0-1d8cbebd961b
STEP: Creating secret with name s-test-opt-create-9bbaca09-0bca-4aae-a438-29a454339a4e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:39:59.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-334" for this suite.

• [SLOW TEST:8.424 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:39:59.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  3 21:40:03.484: INFO: &Pod{ObjectMeta:{send-events-a51f1d0f-7c28-4e88-890b-7843e363a634  events-7538 /api/v1/namespaces/events-7538/pods/send-events-a51f1d0f-7c28-4e88-890b-7843e363a634 adde8204-d2ad-4475-aafe-449a50e2a34d 6393046 0 2021-02-03 21:39:59 +0000 UTC   map[name:foo time:434925618] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fxpk6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fxpk6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fxpk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:39:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:40:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:40:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:39:59 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.70,StartTime:2021-02-03 21:39:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 21:40:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9166df91ee6ac3cb2d7ae8483e5e5271653252baaec89d960fec4b4f5579c6dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb  3 21:40:05.493: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  3 21:40:07.498: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:40:07.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7538" for this suite.

• [SLOW TEST:8.185 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":191,"skipped":3291,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:40:07.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Feb  3 21:40:07.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  3 21:40:07.824: INFO: stderr: ""
Feb  3 21:40:07.824: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:40:07.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2331" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":192,"skipped":3301,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:40:07.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4431
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4431
STEP: creating replication controller externalsvc in namespace services-4431
I0203 21:40:08.145823       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4431, replica count: 2
I0203 21:40:11.196192       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:40:14.196381       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb  3 21:40:14.243: INFO: Creating new exec pod
Feb  3 21:40:18.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4431 execpodz9r7r -- /bin/sh -x -c nslookup nodeport-service'
Feb  3 21:40:18.493: INFO: stderr: "I0203 21:40:18.404499    2915 log.go:172] (0xc000a60a50) (0xc0006c3ea0) Create stream\nI0203 21:40:18.404566    2915 log.go:172] (0xc000a60a50) (0xc0006c3ea0) Stream added, broadcasting: 1\nI0203 21:40:18.412266    2915 log.go:172] (0xc000a60a50) Reply frame received for 1\nI0203 21:40:18.412321    2915 log.go:172] (0xc000a60a50) (0xc0005e8780) Create stream\nI0203 21:40:18.412335    2915 log.go:172] (0xc000a60a50) (0xc0005e8780) Stream added, broadcasting: 3\nI0203 21:40:18.413320    2915 log.go:172] (0xc000a60a50) Reply frame received for 3\nI0203 21:40:18.413363    2915 log.go:172] (0xc000a60a50) (0xc0006c3f40) Create stream\nI0203 21:40:18.413376    2915 log.go:172] (0xc000a60a50) (0xc0006c3f40) Stream added, broadcasting: 5\nI0203 21:40:18.414116    2915 log.go:172] (0xc000a60a50) Reply frame received for 5\nI0203 21:40:18.474687    2915 log.go:172] (0xc000a60a50) Data frame received for 5\nI0203 21:40:18.474716    2915 log.go:172] (0xc0006c3f40) (5) Data frame handling\nI0203 21:40:18.474733    2915 log.go:172] (0xc0006c3f40) (5) Data frame sent\n+ nslookup nodeport-service\nI0203 21:40:18.484236    2915 log.go:172] (0xc000a60a50) Data frame received for 3\nI0203 21:40:18.484280    2915 log.go:172] (0xc0005e8780) (3) Data frame handling\nI0203 21:40:18.484311    2915 log.go:172] (0xc0005e8780) (3) Data frame sent\nI0203 21:40:18.485153    2915 log.go:172] (0xc000a60a50) Data frame received for 3\nI0203 21:40:18.485171    2915 log.go:172] (0xc0005e8780) (3) Data frame handling\nI0203 21:40:18.485179    2915 log.go:172] (0xc0005e8780) (3) Data frame sent\nI0203 21:40:18.485425    2915 log.go:172] (0xc000a60a50) Data frame received for 3\nI0203 21:40:18.485446    2915 log.go:172] (0xc0005e8780) (3) Data frame handling\nI0203 21:40:18.485588    2915 log.go:172] (0xc000a60a50) Data frame received for 5\nI0203 21:40:18.485604    2915 log.go:172] (0xc0006c3f40) (5) Data frame handling\nI0203 21:40:18.487639    2915 log.go:172] (0xc000a60a50) Data frame received for 1\nI0203 21:40:18.487662    2915 log.go:172] (0xc0006c3ea0) (1) Data frame handling\nI0203 21:40:18.487674    2915 log.go:172] (0xc0006c3ea0) (1) Data frame sent\nI0203 21:40:18.487697    2915 log.go:172] (0xc000a60a50) (0xc0006c3ea0) Stream removed, broadcasting: 1\nI0203 21:40:18.487853    2915 log.go:172] (0xc000a60a50) Go away received\nI0203 21:40:18.488054    2915 log.go:172] (0xc000a60a50) (0xc0006c3ea0) Stream removed, broadcasting: 1\nI0203 21:40:18.488071    2915 log.go:172] (0xc000a60a50) (0xc0005e8780) Stream removed, broadcasting: 3\nI0203 21:40:18.488080    2915 log.go:172] (0xc000a60a50) (0xc0006c3f40) Stream removed, broadcasting: 5\n"
Feb  3 21:40:18.493: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4431.svc.cluster.local\tcanonical name = externalsvc.services-4431.svc.cluster.local.\nName:\texternalsvc.services-4431.svc.cluster.local\nAddress: 10.96.233.171\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4431, will wait for the garbage collector to delete the pods
Feb  3 21:40:18.618: INFO: Deleting ReplicationController externalsvc took: 70.903185ms
Feb  3 21:40:19.018: INFO: Terminating ReplicationController externalsvc pods took: 400.298805ms
Feb  3 21:40:32.134: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:40:32.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4431" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.325 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":193,"skipped":3304,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:40:32.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:40:38.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3549" for this suite.
STEP: Destroying namespace "nsdeletetest-8824" for this suite.
Feb  3 21:40:38.899: INFO: Namespace nsdeletetest-8824 was already deleted
STEP: Destroying namespace "nsdeletetest-2320" for this suite.

• [SLOW TEST:6.745 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":194,"skipped":3309,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:40:38.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 21:40:39.064: INFO: Waiting up to 5m0s for pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07" in namespace "emptydir-1177" to be "success or failure"
Feb  3 21:40:39.198: INFO: Pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07": Phase="Pending", Reason="", readiness=false. Elapsed: 134.018544ms
Feb  3 21:40:41.202: INFO: Pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138090066s
Feb  3 21:40:43.207: INFO: Pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07": Phase="Running", Reason="", readiness=true. Elapsed: 4.142654938s
Feb  3 21:40:45.211: INFO: Pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146463924s
STEP: Saw pod success
Feb  3 21:40:45.211: INFO: Pod "pod-50e0ba4a-f144-4597-b2dd-8b1786752b07" satisfied condition "success or failure"
Feb  3 21:40:45.214: INFO: Trying to get logs from node jerma-worker pod pod-50e0ba4a-f144-4597-b2dd-8b1786752b07 container test-container: 
STEP: delete the pod
Feb  3 21:40:45.232: INFO: Waiting for pod pod-50e0ba4a-f144-4597-b2dd-8b1786752b07 to disappear
Feb  3 21:40:45.236: INFO: Pod pod-50e0ba4a-f144-4597-b2dd-8b1786752b07 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:40:45.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1177" for this suite.

• [SLOW TEST:6.348 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3326,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:40:45.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5683
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 21:40:45.331: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 21:41:09.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.74:8080/dial?request=hostname&protocol=udp&host=10.244.2.73&port=8081&tries=1'] Namespace:pod-network-test-5683 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:41:09.448: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:41:09.489090       6 log.go:172] (0xc002a4efd0) (0xc0020bc320) Create stream
I0203 21:41:09.489116       6 log.go:172] (0xc002a4efd0) (0xc0020bc320) Stream added, broadcasting: 1
I0203 21:41:09.491384       6 log.go:172] (0xc002a4efd0) Reply frame received for 1
I0203 21:41:09.491423       6 log.go:172] (0xc002a4efd0) (0xc000f26780) Create stream
I0203 21:41:09.491437       6 log.go:172] (0xc002a4efd0) (0xc000f26780) Stream added, broadcasting: 3
I0203 21:41:09.492534       6 log.go:172] (0xc002a4efd0) Reply frame received for 3
I0203 21:41:09.492614       6 log.go:172] (0xc002a4efd0) (0xc002a74780) Create stream
I0203 21:41:09.492631       6 log.go:172] (0xc002a4efd0) (0xc002a74780) Stream added, broadcasting: 5
I0203 21:41:09.493686       6 log.go:172] (0xc002a4efd0) Reply frame received for 5
I0203 21:41:09.566778       6 log.go:172] (0xc002a4efd0) Data frame received for 3
I0203 21:41:09.566811       6 log.go:172] (0xc000f26780) (3) Data frame handling
I0203 21:41:09.566830       6 log.go:172] (0xc000f26780) (3) Data frame sent
I0203 21:41:09.567127       6 log.go:172] (0xc002a4efd0) Data frame received for 3
I0203 21:41:09.567153       6 log.go:172] (0xc000f26780) (3) Data frame handling
I0203 21:41:09.567256       6 log.go:172] (0xc002a4efd0) Data frame received for 5
I0203 21:41:09.567271       6 log.go:172] (0xc002a74780) (5) Data frame handling
I0203 21:41:09.568665       6 log.go:172] (0xc002a4efd0) Data frame received for 1
I0203 21:41:09.568681       6 log.go:172] (0xc0020bc320) (1) Data frame handling
I0203 21:41:09.568694       6 log.go:172] (0xc0020bc320) (1) Data frame sent
I0203 21:41:09.568702       6 log.go:172] (0xc002a4efd0) (0xc0020bc320) Stream removed, broadcasting: 1
I0203 21:41:09.568715       6 log.go:172] (0xc002a4efd0) Go away received
I0203 21:41:09.569017       6 log.go:172] (0xc002a4efd0) (0xc0020bc320) Stream removed, broadcasting: 1
I0203 21:41:09.569051       6 log.go:172] (0xc002a4efd0) (0xc000f26780) Stream removed, broadcasting: 3
I0203 21:41:09.569071       6 log.go:172] (0xc002a4efd0) (0xc002a74780) Stream removed, broadcasting: 5
Feb  3 21:41:09.569: INFO: Waiting for responses: map[]
Feb  3 21:41:09.572: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.74:8080/dial?request=hostname&protocol=udp&host=10.244.1.73&port=8081&tries=1'] Namespace:pod-network-test-5683 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:41:09.572: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:41:09.604577       6 log.go:172] (0xc001844b00) (0xc002a74b40) Create stream
I0203 21:41:09.604601       6 log.go:172] (0xc001844b00) (0xc002a74b40) Stream added, broadcasting: 1
I0203 21:41:09.608293       6 log.go:172] (0xc001844b00) Reply frame received for 1
I0203 21:41:09.608353       6 log.go:172] (0xc001844b00) (0xc002a74be0) Create stream
I0203 21:41:09.608379       6 log.go:172] (0xc001844b00) (0xc002a74be0) Stream added, broadcasting: 3
I0203 21:41:09.609754       6 log.go:172] (0xc001844b00) Reply frame received for 3
I0203 21:41:09.609797       6 log.go:172] (0xc001844b00) (0xc0020bc3c0) Create stream
I0203 21:41:09.609811       6 log.go:172] (0xc001844b00) (0xc0020bc3c0) Stream added, broadcasting: 5
I0203 21:41:09.610856       6 log.go:172] (0xc001844b00) Reply frame received for 5
I0203 21:41:09.677600       6 log.go:172] (0xc001844b00) Data frame received for 3
I0203 21:41:09.677649       6 log.go:172] (0xc002a74be0) (3) Data frame handling
I0203 21:41:09.677689       6 log.go:172] (0xc002a74be0) (3) Data frame sent
I0203 21:41:09.677718       6 log.go:172] (0xc001844b00) Data frame received for 3
I0203 21:41:09.677738       6 log.go:172] (0xc002a74be0) (3) Data frame handling
I0203 21:41:09.678115       6 log.go:172] (0xc001844b00) Data frame received for 5
I0203 21:41:09.678139       6 log.go:172] (0xc0020bc3c0) (5) Data frame handling
I0203 21:41:09.679259       6 log.go:172] (0xc001844b00) Data frame received for 1
I0203 21:41:09.679276       6 log.go:172] (0xc002a74b40) (1) Data frame handling
I0203 21:41:09.679307       6 log.go:172] (0xc002a74b40) (1) Data frame sent
I0203 21:41:09.679320       6 log.go:172] (0xc001844b00) (0xc002a74b40) Stream removed, broadcasting: 1
I0203 21:41:09.679344       6 log.go:172] (0xc001844b00) Go away received
I0203 21:41:09.679442       6 log.go:172] (0xc001844b00) (0xc002a74b40) Stream removed, broadcasting: 1
I0203 21:41:09.679471       6 log.go:172] (0xc001844b00) (0xc002a74be0) Stream removed, broadcasting: 3
I0203 21:41:09.679492       6 log.go:172] (0xc001844b00) (0xc0020bc3c0) Stream removed, broadcasting: 5
Feb  3 21:41:09.679: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:09.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5683" for this suite.

• [SLOW TEST:24.436 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:09.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb  3 21:41:09.733: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:09.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8462" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":197,"skipped":3454,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:09.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:41:09.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c" in namespace "projected-9448" to be "success or failure"
Feb  3 21:41:10.010: INFO: Pod "downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.939488ms
Feb  3 21:41:12.043: INFO: Pod "downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059171522s
Feb  3 21:41:14.047: INFO: Pod "downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063065862s
STEP: Saw pod success
Feb  3 21:41:14.047: INFO: Pod "downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c" satisfied condition "success or failure"
Feb  3 21:41:14.050: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c container client-container: 
STEP: delete the pod
Feb  3 21:41:14.093: INFO: Waiting for pod downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c to disappear
Feb  3 21:41:14.105: INFO: Pod downwardapi-volume-dca961c7-1834-4cf9-8700-9c1a22f8960c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:14.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9448" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:14.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0203 21:41:24.210041       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 21:41:24.210: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:24.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5959" for this suite.

• [SLOW TEST:10.103 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":199,"skipped":3483,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:24.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Feb  3 21:41:24.304: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix948738929/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:24.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3171" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":200,"skipped":3488,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:24.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-89fa7881-0acb-48c1-b9b3-1fdf1ebb0424
STEP: Creating a pod to test consume configMaps
Feb  3 21:41:24.483: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def" in namespace "projected-2944" to be "success or failure"
Feb  3 21:41:24.496: INFO: Pod "pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def": Phase="Pending", Reason="", readiness=false. Elapsed: 12.252161ms
Feb  3 21:41:26.576: INFO: Pod "pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092769461s
Feb  3 21:41:28.580: INFO: Pod "pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096278891s
STEP: Saw pod success
Feb  3 21:41:28.580: INFO: Pod "pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def" satisfied condition "success or failure"
Feb  3 21:41:28.583: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 21:41:28.620: INFO: Waiting for pod pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def to disappear
Feb  3 21:41:28.631: INFO: Pod pod-projected-configmaps-05184f37-6bc9-45f8-8cbb-aa69832a6def no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:28.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2944" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3502,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:28.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb  3 21:41:28.865: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb  3 21:41:28.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:29.241: INFO: stderr: ""
Feb  3 21:41:29.241: INFO: stdout: "service/agnhost-slave created\n"
Feb  3 21:41:29.241: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb  3 21:41:29.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:29.618: INFO: stderr: ""
Feb  3 21:41:29.618: INFO: stdout: "service/agnhost-master created\n"
Feb  3 21:41:29.618: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  3 21:41:29.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:29.969: INFO: stderr: ""
Feb  3 21:41:29.969: INFO: stdout: "service/frontend created\n"
Feb  3 21:41:29.969: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb  3 21:41:29.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:30.215: INFO: stderr: ""
Feb  3 21:41:30.215: INFO: stdout: "deployment.apps/frontend created\n"
Feb  3 21:41:30.215: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 21:41:30.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:30.525: INFO: stderr: ""
Feb  3 21:41:30.525: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb  3 21:41:30.525: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 21:41:30.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5962'
Feb  3 21:41:30.826: INFO: stderr: ""
Feb  3 21:41:30.826: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb  3 21:41:30.826: INFO: Waiting for all frontend pods to be Running.
Feb  3 21:41:40.877: INFO: Waiting for frontend to serve content.
Feb  3 21:41:40.887: INFO: Trying to add a new entry to the guestbook.
Feb  3 21:41:40.897: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  3 21:41:40.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.045: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.045: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 21:41:41.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.228: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.228: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 21:41:41.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.368: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.368: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 21:41:41.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.501: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 21:41:41.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.633: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.633: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 21:41:41.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5962'
Feb  3 21:41:41.774: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 21:41:41.774: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:41:41.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5962" for this suite.

• [SLOW TEST:13.142 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":202,"skipped":3505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:41:41.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-8h9v
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 21:41:41.984: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8h9v" in namespace "subpath-5892" to be "success or failure"
Feb  3 21:41:42.451: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Pending", Reason="", readiness=false. Elapsed: 467.294453ms
Feb  3 21:41:44.508: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523944964s
Feb  3 21:41:46.542: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.557766344s
Feb  3 21:41:48.546: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 6.562142823s
Feb  3 21:41:50.550: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 8.566062386s
Feb  3 21:41:52.553: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 10.569117952s
Feb  3 21:41:54.557: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 12.573032673s
Feb  3 21:41:56.561: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 14.577013665s
Feb  3 21:41:58.565: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 16.581081214s
Feb  3 21:42:00.570: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 18.585797952s
Feb  3 21:42:02.574: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 20.589876995s
Feb  3 21:42:04.578: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 22.593861292s
Feb  3 21:42:06.582: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Running", Reason="", readiness=true. Elapsed: 24.598240626s
Feb  3 21:42:08.587: INFO: Pod "pod-subpath-test-projected-8h9v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.602897498s
STEP: Saw pod success
Feb  3 21:42:08.587: INFO: Pod "pod-subpath-test-projected-8h9v" satisfied condition "success or failure"
Feb  3 21:42:08.590: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-8h9v container test-container-subpath-projected-8h9v: 
STEP: delete the pod
Feb  3 21:42:08.610: INFO: Waiting for pod pod-subpath-test-projected-8h9v to disappear
Feb  3 21:42:08.630: INFO: Pod pod-subpath-test-projected-8h9v no longer exists
STEP: Deleting pod pod-subpath-test-projected-8h9v
Feb  3 21:42:08.630: INFO: Deleting pod "pod-subpath-test-projected-8h9v" in namespace "subpath-5892"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:42:08.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5892" for this suite.

• [SLOW TEST:26.858 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":203,"skipped":3529,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:42:08.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:42:08.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5039" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":204,"skipped":3534,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:42:08.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-e491b405-5767-4d56-96d8-6db6db736246
STEP: Creating a pod to test consume secrets
Feb  3 21:42:09.164: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66" in namespace "projected-9555" to be "success or failure"
Feb  3 21:42:09.168: INFO: Pod "pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142823ms
Feb  3 21:42:11.247: INFO: Pod "pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08341388s
Feb  3 21:42:13.251: INFO: Pod "pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087340618s
STEP: Saw pod success
Feb  3 21:42:13.251: INFO: Pod "pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66" satisfied condition "success or failure"
Feb  3 21:42:13.254: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 21:42:13.338: INFO: Waiting for pod pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66 to disappear
Feb  3 21:42:13.352: INFO: Pod pod-projected-secrets-1f73e5d2-df0f-45b6-9b44-dd9f11850d66 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:42:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9555" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3558,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:42:13.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Feb  3 21:42:14.011: INFO: created pod pod-service-account-defaultsa
Feb  3 21:42:14.011: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  3 21:42:14.029: INFO: created pod pod-service-account-mountsa
Feb  3 21:42:14.029: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  3 21:42:14.068: INFO: created pod pod-service-account-nomountsa
Feb  3 21:42:14.068: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  3 21:42:14.081: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  3 21:42:14.081: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  3 21:42:14.104: INFO: created pod pod-service-account-mountsa-mountspec
Feb  3 21:42:14.104: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  3 21:42:14.138: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  3 21:42:14.138: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  3 21:42:14.156: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  3 21:42:14.156: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  3 21:42:14.212: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  3 21:42:14.212: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  3 21:42:14.235: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  3 21:42:14.235: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:42:14.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6550" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":206,"skipped":3562,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:42:14.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-7023
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 21:42:14.495: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 21:42:50.688: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.84:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7023 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:42:50.688: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:42:50.727429       6 log.go:172] (0xc002dda6e0) (0xc0016c7a40) Create stream
I0203 21:42:50.727459       6 log.go:172] (0xc002dda6e0) (0xc0016c7a40) Stream added, broadcasting: 1
I0203 21:42:50.729208       6 log.go:172] (0xc002dda6e0) Reply frame received for 1
I0203 21:42:50.729234       6 log.go:172] (0xc002dda6e0) (0xc00201a5a0) Create stream
I0203 21:42:50.729244       6 log.go:172] (0xc002dda6e0) (0xc00201a5a0) Stream added, broadcasting: 3
I0203 21:42:50.730286       6 log.go:172] (0xc002dda6e0) Reply frame received for 3
I0203 21:42:50.730305       6 log.go:172] (0xc002dda6e0) (0xc0016c7b80) Create stream
I0203 21:42:50.730321       6 log.go:172] (0xc002dda6e0) (0xc0016c7b80) Stream added, broadcasting: 5
I0203 21:42:50.731185       6 log.go:172] (0xc002dda6e0) Reply frame received for 5
I0203 21:42:50.844757       6 log.go:172] (0xc002dda6e0) Data frame received for 3
I0203 21:42:50.844805       6 log.go:172] (0xc00201a5a0) (3) Data frame handling
I0203 21:42:50.844828       6 log.go:172] (0xc00201a5a0) (3) Data frame sent
I0203 21:42:50.845135       6 log.go:172] (0xc002dda6e0) Data frame received for 3
I0203 21:42:50.845155       6 log.go:172] (0xc00201a5a0) (3) Data frame handling
I0203 21:42:50.845394       6 log.go:172] (0xc002dda6e0) Data frame received for 5
I0203 21:42:50.845425       6 log.go:172] (0xc0016c7b80) (5) Data frame handling
I0203 21:42:50.847027       6 log.go:172] (0xc002dda6e0) Data frame received for 1
I0203 21:42:50.847059       6 log.go:172] (0xc0016c7a40) (1) Data frame handling
I0203 21:42:50.847073       6 log.go:172] (0xc0016c7a40) (1) Data frame sent
I0203 21:42:50.847085       6 log.go:172] (0xc002dda6e0) (0xc0016c7a40) Stream removed, broadcasting: 1
I0203 21:42:50.847109       6 log.go:172] (0xc002dda6e0) Go away received
I0203 21:42:50.847239       6 log.go:172] (0xc002dda6e0) (0xc0016c7a40) Stream removed, broadcasting: 1
I0203 21:42:50.847267       6 log.go:172] (0xc002dda6e0) (0xc00201a5a0) Stream removed, broadcasting: 3
I0203 21:42:50.847285       6 log.go:172] (0xc002dda6e0) (0xc0016c7b80) Stream removed, broadcasting: 5
Feb  3 21:42:50.847: INFO: Found all expected endpoints: [netserver-0]
Feb  3 21:42:50.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.86:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7023 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:42:50.850: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:42:50.871562       6 log.go:172] (0xc001a1c420) (0xc00201b040) Create stream
I0203 21:42:50.871588       6 log.go:172] (0xc001a1c420) (0xc00201b040) Stream added, broadcasting: 1
I0203 21:42:50.873543       6 log.go:172] (0xc001a1c420) Reply frame received for 1
I0203 21:42:50.873623       6 log.go:172] (0xc001a1c420) (0xc001ff1180) Create stream
I0203 21:42:50.873653       6 log.go:172] (0xc001a1c420) (0xc001ff1180) Stream added, broadcasting: 3
I0203 21:42:50.874700       6 log.go:172] (0xc001a1c420) Reply frame received for 3
I0203 21:42:50.874754       6 log.go:172] (0xc001a1c420) (0xc00249eaa0) Create stream
I0203 21:42:50.874771       6 log.go:172] (0xc001a1c420) (0xc00249eaa0) Stream added, broadcasting: 5
I0203 21:42:50.875768       6 log.go:172] (0xc001a1c420) Reply frame received for 5
I0203 21:42:50.941588       6 log.go:172] (0xc001a1c420) Data frame received for 3
I0203 21:42:50.941622       6 log.go:172] (0xc001ff1180) (3) Data frame handling
I0203 21:42:50.941643       6 log.go:172] (0xc001ff1180) (3) Data frame sent
I0203 21:42:50.941794       6 log.go:172] (0xc001a1c420) Data frame received for 5
I0203 21:42:50.941820       6 log.go:172] (0xc00249eaa0) (5) Data frame handling
I0203 21:42:50.941837       6 log.go:172] (0xc001a1c420) Data frame received for 3
I0203 21:42:50.941843       6 log.go:172] (0xc001ff1180) (3) Data frame handling
I0203 21:42:50.943341       6 log.go:172] (0xc001a1c420) Data frame received for 1
I0203 21:42:50.943375       6 log.go:172] (0xc00201b040) (1) Data frame handling
I0203 21:42:50.943394       6 log.go:172] (0xc00201b040) (1) Data frame sent
I0203 21:42:50.943410       6 log.go:172] (0xc001a1c420) (0xc00201b040) Stream removed, broadcasting: 1
I0203 21:42:50.943453       6 log.go:172] (0xc001a1c420) Go away received
I0203 21:42:50.943571       6 log.go:172] (0xc001a1c420) (0xc00201b040) Stream removed, broadcasting: 1
I0203 21:42:50.943603       6 log.go:172] (0xc001a1c420) (0xc001ff1180) Stream removed, broadcasting: 3
I0203 21:42:50.943616       6 log.go:172] (0xc001a1c420) (0xc00249eaa0) Stream removed, broadcasting: 5
Feb  3 21:42:50.943: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:42:50.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7023" for this suite.

• [SLOW TEST:36.605 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3572,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:42:50.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:42:59.165: INFO: DNS probes using dns-test-e526f541-3244-45cf-a930-7c6452a342fc succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:43:05.295: INFO: File wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:05.297: INFO: File jessie_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:05.297: INFO: Lookups using dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f failed for: [wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local jessie_udp@dns-test-service-3.dns-534.svc.cluster.local]

Feb  3 21:43:10.303: INFO: File wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:10.307: INFO: File jessie_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:10.307: INFO: Lookups using dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f failed for: [wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local jessie_udp@dns-test-service-3.dns-534.svc.cluster.local]

Feb  3 21:43:15.302: INFO: File wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:15.305: INFO: File jessie_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:15.305: INFO: Lookups using dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f failed for: [wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local jessie_udp@dns-test-service-3.dns-534.svc.cluster.local]

Feb  3 21:43:20.303: INFO: File wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:20.306: INFO: File jessie_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:20.307: INFO: Lookups using dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f failed for: [wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local jessie_udp@dns-test-service-3.dns-534.svc.cluster.local]

Feb  3 21:43:25.302: INFO: File wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:25.305: INFO: File jessie_udp@dns-test-service-3.dns-534.svc.cluster.local from pod  dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 21:43:25.305: INFO: Lookups using dns-534/dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f failed for: [wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local jessie_udp@dns-test-service-3.dns-534.svc.cluster.local]

Feb  3 21:43:30.305: INFO: DNS probes using dns-test-a18616a3-c6a1-48f2-85fa-d5018d3f613f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-534.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-534.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 21:43:38.989: INFO: DNS probes using dns-test-56858ab7-a7d5-4cdf-8fd3-f3f0daff9989 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:43:39.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-534" for this suite.

• [SLOW TEST:48.112 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":208,"skipped":3583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:43:39.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:43:39.161: INFO: Creating deployment "test-recreate-deployment"
Feb  3 21:43:39.164: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  3 21:43:39.457: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  3 21:43:41.464: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  3 21:43:41.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985419, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985419, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985419, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985419, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:43:43.471: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  3 21:43:43.478: INFO: Updating deployment test-recreate-deployment
Feb  3 21:43:43.478: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  3 21:43:43.921: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1227 /apis/apps/v1/namespaces/deployment-1227/deployments/test-recreate-deployment c065ed3d-b53c-4477-bb12-facc6d792469 6394585 2 2021-02-03 21:43:39 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00447c578  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-03 21:43:43 +0000 UTC,LastTransitionTime:2021-02-03 21:43:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2021-02-03 21:43:43 +0000 UTC,LastTransitionTime:2021-02-03 21:43:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb  3 21:43:43.925: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-1227 /apis/apps/v1/namespaces/deployment-1227/replicasets/test-recreate-deployment-5f94c574ff bf452d94-5d91-45d8-8f56-ecd3c4f4a476 6394583 1 2021-02-03 21:43:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c065ed3d-b53c-4477-bb12-facc6d792469 0xc00447c927 0xc00447c928}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00447c988  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  3 21:43:43.925: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  3 21:43:43.925: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-1227 /apis/apps/v1/namespaces/deployment-1227/replicasets/test-recreate-deployment-799c574856 7eb20ef9-cf69-4400-9cf8-ed1249d7bac2 6394573 2 2021-02-03 21:43:39 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c065ed3d-b53c-4477-bb12-facc6d792469 0xc00447c9f7 0xc00447c9f8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00447ca68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  3 21:43:43.928: INFO: Pod "test-recreate-deployment-5f94c574ff-pj6m5" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pj6m5 test-recreate-deployment-5f94c574ff- deployment-1227 /api/v1/namespaces/deployment-1227/pods/test-recreate-deployment-5f94c574ff-pj6m5 f94a1493-fdd3-4abe-9781-5f305ea93244 6394584 0 2021-02-03 21:43:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff bf452d94-5d91-45d8-8f56-ecd3c4f4a476 0xc00447cf07 0xc00447cf08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sp68s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sp68s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sp68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:43:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:43:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 21:43:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2021-02-03 21:43:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:43:43.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1227" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":209,"skipped":3619,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:43:43.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1043/configmap-test-2ab4d7c0-b2d1-414d-abff-66380a186f12
STEP: Creating a pod to test consume configMaps
Feb  3 21:43:44.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b" in namespace "configmap-1043" to be "success or failure"
Feb  3 21:43:44.265: INFO: Pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.012778ms
Feb  3 21:43:46.740: INFO: Pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500561706s
Feb  3 21:43:48.743: INFO: Pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b": Phase="Running", Reason="", readiness=true. Elapsed: 4.503853784s
Feb  3 21:43:50.747: INFO: Pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.507185726s
STEP: Saw pod success
Feb  3 21:43:50.747: INFO: Pod "pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b" satisfied condition "success or failure"
Feb  3 21:43:50.750: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b container env-test: 
STEP: delete the pod
Feb  3 21:43:50.781: INFO: Waiting for pod pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b to disappear
Feb  3 21:43:50.786: INFO: Pod pod-configmaps-bdfe0770-f2e0-44ea-a641-069a7f4e599b no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:43:50.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1043" for this suite.

• [SLOW TEST:6.858 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3626,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:43:50.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 21:43:50.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5312'
Feb  3 21:43:51.033: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 21:43:51.033: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Feb  3 21:43:51.069: INFO: scanned /root for discovery docs: 
Feb  3 21:43:51.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5312'
Feb  3 21:44:06.957: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 21:44:06.958: INFO: stdout: "Created e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77\nScaling up e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb  3 21:44:06.958: INFO: stdout: "Created e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77\nScaling up e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb  3 21:44:06.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5312'
Feb  3 21:44:07.077: INFO: stderr: ""
Feb  3 21:44:07.077: INFO: stdout: "e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77-k6mcb "
Feb  3 21:44:07.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77-k6mcb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5312'
Feb  3 21:44:07.179: INFO: stderr: ""
Feb  3 21:44:07.179: INFO: stdout: "true"
Feb  3 21:44:07.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77-k6mcb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5312'
Feb  3 21:44:07.263: INFO: stderr: ""
Feb  3 21:44:07.263: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb  3 21:44:07.263: INFO: e2e-test-httpd-rc-4241a36dc6f3634bc391854305b49c77-k6mcb is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Feb  3 21:44:07.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5312'
Feb  3 21:44:07.416: INFO: stderr: ""
Feb  3 21:44:07.416: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:44:07.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5312" for this suite.

• [SLOW TEST:16.730 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":211,"skipped":3637,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:44:07.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:44:07.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:44:11.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5617" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3647,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:44:11.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:185
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:44:11.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2254" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":213,"skipped":3676,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:44:11.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9650
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  3 21:44:11.954: INFO: Found 0 stateful pods, waiting for 3
Feb  3 21:44:21.963: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:44:21.963: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:44:21.963: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 21:44:31.958: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:44:31.958: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:44:31.958: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 21:44:31.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9650 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:44:32.192: INFO: stderr: "I0203 21:44:32.082960    3355 log.go:172] (0xc000a26b00) (0xc000754280) Create stream\nI0203 21:44:32.083002    3355 log.go:172] (0xc000a26b00) (0xc000754280) Stream added, broadcasting: 1\nI0203 21:44:32.085497    3355 log.go:172] (0xc000a26b00) Reply frame received for 1\nI0203 21:44:32.085541    3355 log.go:172] (0xc000a26b00) (0xc0007f8140) Create stream\nI0203 21:44:32.085552    3355 log.go:172] (0xc000a26b00) (0xc0007f8140) Stream added, broadcasting: 3\nI0203 21:44:32.086382    3355 log.go:172] (0xc000a26b00) Reply frame received for 3\nI0203 21:44:32.086425    3355 log.go:172] (0xc000a26b00) (0xc000860000) Create stream\nI0203 21:44:32.086441    3355 log.go:172] (0xc000a26b00) (0xc000860000) Stream added, broadcasting: 5\nI0203 21:44:32.087088    3355 log.go:172] (0xc000a26b00) Reply frame received for 5\nI0203 21:44:32.138134    3355 log.go:172] (0xc000a26b00) Data frame received for 5\nI0203 21:44:32.138156    3355 log.go:172] (0xc000860000) (5) Data frame handling\nI0203 21:44:32.138170    3355 log.go:172] (0xc000860000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:44:32.183352    3355 log.go:172] (0xc000a26b00) Data frame received for 3\nI0203 21:44:32.183391    3355 log.go:172] (0xc0007f8140) (3) Data frame handling\nI0203 21:44:32.183423    3355 log.go:172] (0xc0007f8140) (3) Data frame sent\nI0203 21:44:32.183726    3355 log.go:172] (0xc000a26b00) Data frame received for 5\nI0203 21:44:32.183762    3355 log.go:172] (0xc000a26b00) Data frame received for 3\nI0203 21:44:32.183802    3355 log.go:172] (0xc0007f8140) (3) Data frame handling\nI0203 21:44:32.183837    3355 log.go:172] (0xc000860000) (5) Data frame handling\nI0203 21:44:32.185937    3355 log.go:172] (0xc000a26b00) Data frame received for 1\nI0203 21:44:32.185972    3355 log.go:172] (0xc000754280) (1) Data frame handling\nI0203 21:44:32.185993    3355 log.go:172] (0xc000754280) (1) Data frame sent\nI0203 21:44:32.186016    3355 log.go:172] (0xc000a26b00) (0xc000754280) Stream removed, broadcasting: 1\nI0203 21:44:32.186046    3355 log.go:172] (0xc000a26b00) Go away received\nI0203 21:44:32.186539    3355 log.go:172] (0xc000a26b00) (0xc000754280) Stream removed, broadcasting: 1\nI0203 21:44:32.186576    3355 log.go:172] (0xc000a26b00) (0xc0007f8140) Stream removed, broadcasting: 3\nI0203 21:44:32.186591    3355 log.go:172] (0xc000a26b00) (0xc000860000) Stream removed, broadcasting: 5\n"
Feb  3 21:44:32.193: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:44:32.193: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  3 21:44:42.235: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  3 21:44:52.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9650 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 21:44:52.483: INFO: stderr: "I0203 21:44:52.414501    3375 log.go:172] (0xc00090aa50) (0xc0009ce000) Create stream\nI0203 21:44:52.414586    3375 log.go:172] (0xc00090aa50) (0xc0009ce000) Stream added, broadcasting: 1\nI0203 21:44:52.417671    3375 log.go:172] (0xc00090aa50) Reply frame received for 1\nI0203 21:44:52.417727    3375 log.go:172] (0xc00090aa50) (0xc000a48000) Create stream\nI0203 21:44:52.417742    3375 log.go:172] (0xc00090aa50) (0xc000a48000) Stream added, broadcasting: 3\nI0203 21:44:52.418702    3375 log.go:172] (0xc00090aa50) Reply frame received for 3\nI0203 21:44:52.418743    3375 log.go:172] (0xc00090aa50) (0xc0009ce0a0) Create stream\nI0203 21:44:52.418755    3375 log.go:172] (0xc00090aa50) (0xc0009ce0a0) Stream added, broadcasting: 5\nI0203 21:44:52.419826    3375 log.go:172] (0xc00090aa50) Reply frame received for 5\nI0203 21:44:52.476417    3375 log.go:172] (0xc00090aa50) Data frame received for 5\nI0203 21:44:52.476450    3375 log.go:172] (0xc0009ce0a0) (5) Data frame handling\nI0203 21:44:52.476464    3375 log.go:172] (0xc0009ce0a0) (5) Data frame sent\nI0203 21:44:52.476473    3375 log.go:172] (0xc00090aa50) Data frame received for 5\nI0203 21:44:52.476481    3375 log.go:172] (0xc0009ce0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 21:44:52.476524    3375 log.go:172] (0xc00090aa50) Data frame received for 3\nI0203 21:44:52.476585    3375 log.go:172] (0xc000a48000) (3) Data frame handling\nI0203 21:44:52.476616    3375 log.go:172] (0xc000a48000) (3) Data frame sent\nI0203 21:44:52.476634    3375 log.go:172] (0xc00090aa50) Data frame received for 3\nI0203 21:44:52.476645    3375 log.go:172] (0xc000a48000) (3) Data frame handling\nI0203 21:44:52.477717    3375 log.go:172] (0xc00090aa50) Data frame received for 1\nI0203 21:44:52.477747    3375 log.go:172] (0xc0009ce000) (1) Data frame handling\nI0203 21:44:52.477776    3375 log.go:172] (0xc0009ce000) (1) Data frame sent\nI0203 21:44:52.477802    3375 log.go:172] (0xc00090aa50) (0xc0009ce000) Stream removed, broadcasting: 1\nI0203 21:44:52.477845    3375 log.go:172] (0xc00090aa50) Go away received\nI0203 21:44:52.478257    3375 log.go:172] (0xc00090aa50) (0xc0009ce000) Stream removed, broadcasting: 1\nI0203 21:44:52.478288    3375 log.go:172] (0xc00090aa50) (0xc000a48000) Stream removed, broadcasting: 3\nI0203 21:44:52.478308    3375 log.go:172] (0xc00090aa50) (0xc0009ce0a0) Stream removed, broadcasting: 5\n"
Feb  3 21:44:52.483: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 21:44:52.483: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 21:45:02.501: INFO: Waiting for StatefulSet statefulset-9650/ss2 to complete update
Feb  3 21:45:02.501: INFO: Waiting for Pod statefulset-9650/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  3 21:45:02.501: INFO: Waiting for Pod statefulset-9650/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  3 21:45:12.850: INFO: Waiting for StatefulSet statefulset-9650/ss2 to complete update
Feb  3 21:45:12.850: INFO: Waiting for Pod statefulset-9650/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  3 21:45:22.509: INFO: Waiting for StatefulSet statefulset-9650/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  3 21:45:32.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9650 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 21:45:32.752: INFO: stderr: "I0203 21:45:32.632084    3397 log.go:172] (0xc0003c0dc0) (0xc00065dc20) Create stream\nI0203 21:45:32.632135    3397 log.go:172] (0xc0003c0dc0) (0xc00065dc20) Stream added, broadcasting: 1\nI0203 21:45:32.635313    3397 log.go:172] (0xc0003c0dc0) Reply frame received for 1\nI0203 21:45:32.635370    3397 log.go:172] (0xc0003c0dc0) (0xc00065de00) Create stream\nI0203 21:45:32.635384    3397 log.go:172] (0xc0003c0dc0) (0xc00065de00) Stream added, broadcasting: 3\nI0203 21:45:32.636660    3397 log.go:172] (0xc0003c0dc0) Reply frame received for 3\nI0203 21:45:32.636704    3397 log.go:172] (0xc0003c0dc0) (0xc0009c6000) Create stream\nI0203 21:45:32.636719    3397 log.go:172] (0xc0003c0dc0) (0xc0009c6000) Stream added, broadcasting: 5\nI0203 21:45:32.637891    3397 log.go:172] (0xc0003c0dc0) Reply frame received for 5\nI0203 21:45:32.700112    3397 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0203 21:45:32.700135    3397 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0203 21:45:32.700148    3397 log.go:172] (0xc0009c6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 21:45:32.744068    3397 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0203 21:45:32.744210    3397 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0203 21:45:32.744245    3397 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0203 21:45:32.744262    3397 log.go:172] (0xc00065de00) (3) Data frame handling\nI0203 21:45:32.744277    3397 log.go:172] (0xc00065de00) (3) Data frame sent\nI0203 21:45:32.744298    3397 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0203 21:45:32.744323    3397 log.go:172] (0xc00065de00) (3) Data frame handling\nI0203 21:45:32.746128    3397 log.go:172] (0xc0003c0dc0) Data frame received for 1\nI0203 21:45:32.746154    3397 log.go:172] (0xc00065dc20) (1) Data frame handling\nI0203 21:45:32.746169    3397 log.go:172] (0xc00065dc20) (1) Data frame sent\nI0203 21:45:32.746183    3397 log.go:172] (0xc0003c0dc0) (0xc00065dc20) Stream removed, broadcasting: 1\nI0203 21:45:32.746205    3397 log.go:172] (0xc0003c0dc0) Go away received\nI0203 21:45:32.746561    3397 log.go:172] (0xc0003c0dc0) (0xc00065dc20) Stream removed, broadcasting: 1\nI0203 21:45:32.746581    3397 log.go:172] (0xc0003c0dc0) (0xc00065de00) Stream removed, broadcasting: 3\nI0203 21:45:32.746591    3397 log.go:172] (0xc0003c0dc0) (0xc0009c6000) Stream removed, broadcasting: 5\n"
Feb  3 21:45:32.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 21:45:32.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 21:45:42.781: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  3 21:45:52.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9650 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 21:45:53.046: INFO: stderr: "I0203 21:45:52.984212    3418 log.go:172] (0xc000a4f6b0) (0xc000ab6820) Create stream\nI0203 21:45:52.984272    3418 log.go:172] (0xc000a4f6b0) (0xc000ab6820) Stream added, broadcasting: 1\nI0203 21:45:52.988104    3418 log.go:172] (0xc000a4f6b0) Reply frame received for 1\nI0203 21:45:52.988176    3418 log.go:172] (0xc000a4f6b0) (0xc0006905a0) Create stream\nI0203 21:45:52.988201    3418 log.go:172] (0xc000a4f6b0) (0xc0006905a0) Stream added, broadcasting: 3\nI0203 21:45:52.989275    3418 log.go:172] (0xc000a4f6b0) Reply frame received for 3\nI0203 21:45:52.989308    3418 log.go:172] (0xc000a4f6b0) (0xc00050b360) Create stream\nI0203 21:45:52.989317    3418 log.go:172] (0xc000a4f6b0) (0xc00050b360) Stream added, broadcasting: 5\nI0203 21:45:52.990068    3418 log.go:172] (0xc000a4f6b0) Reply frame received for 5\nI0203 21:45:53.038905    3418 log.go:172] (0xc000a4f6b0) Data frame received for 5\nI0203 21:45:53.038953    3418 log.go:172] (0xc00050b360) (5) Data frame handling\nI0203 21:45:53.038968    3418 log.go:172] (0xc00050b360) (5) Data frame sent\nI0203 21:45:53.038979    3418 log.go:172] (0xc000a4f6b0) Data frame received for 5\nI0203 21:45:53.038989    3418 log.go:172] (0xc00050b360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 21:45:53.039041    3418 log.go:172] (0xc000a4f6b0) Data frame received for 3\nI0203 21:45:53.039069    3418 log.go:172] (0xc0006905a0) (3) Data frame handling\nI0203 21:45:53.039085    3418 log.go:172] (0xc0006905a0) (3) Data frame sent\nI0203 21:45:53.039093    3418 log.go:172] (0xc000a4f6b0) Data frame received for 3\nI0203 21:45:53.039098    3418 log.go:172] (0xc0006905a0) (3) Data frame handling\nI0203 21:45:53.040424    3418 log.go:172] (0xc000a4f6b0) Data frame received for 1\nI0203 21:45:53.040448    3418 log.go:172] (0xc000ab6820) (1) Data frame handling\nI0203 21:45:53.040462    3418 log.go:172] (0xc000ab6820) (1) Data frame sent\nI0203 21:45:53.040473    3418 log.go:172] (0xc000a4f6b0) (0xc000ab6820) Stream removed, broadcasting: 1\nI0203 21:45:53.040488    3418 log.go:172] (0xc000a4f6b0) Go away received\nI0203 21:45:53.040955    3418 log.go:172] (0xc000a4f6b0) (0xc000ab6820) Stream removed, broadcasting: 1\nI0203 21:45:53.040978    3418 log.go:172] (0xc000a4f6b0) (0xc0006905a0) Stream removed, broadcasting: 3\nI0203 21:45:53.040988    3418 log.go:172] (0xc000a4f6b0) (0xc00050b360) Stream removed, broadcasting: 5\n"
Feb  3 21:45:53.046: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 21:45:53.046: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 21:46:23.065: INFO: Waiting for StatefulSet statefulset-9650/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 21:46:33.072: INFO: Deleting all statefulset in ns statefulset-9650
Feb  3 21:46:33.075: INFO: Scaling statefulset ss2 to 0
Feb  3 21:47:03.111: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 21:47:03.114: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:03.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9650" for this suite.

• [SLOW TEST:171.363 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":214,"skipped":3696,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:03.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-c07f240c-9d92-430b-b0eb-29f09aea9e7c
STEP: Creating a pod to test consume configMaps
Feb  3 21:47:03.222: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61" in namespace "projected-479" to be "success or failure"
Feb  3 21:47:03.224: INFO: Pod "pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148254ms
Feb  3 21:47:05.229: INFO: Pod "pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006755253s
Feb  3 21:47:07.252: INFO: Pod "pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030199143s
STEP: Saw pod success
Feb  3 21:47:07.252: INFO: Pod "pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61" satisfied condition "success or failure"
Feb  3 21:47:07.255: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 21:47:07.289: INFO: Waiting for pod pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61 to disappear
Feb  3 21:47:07.305: INFO: Pod pod-projected-configmaps-470ea796-fdcf-42ad-a47a-da697c6bca61 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-479" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3700,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:07.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  3 21:47:11.971: INFO: Successfully updated pod "labelsupdate9cb76d92-9432-46d5-8374-3e7032c25856"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:16.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7811" for this suite.

• [SLOW TEST:8.701 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3739,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:16.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb  3 21:47:16.101: INFO: >>> kubeConfig: /root/.kube/config
Feb  3 21:47:19.053: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:29.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1075" for this suite.

• [SLOW TEST:13.717 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":217,"skipped":3741,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:29.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 21:47:29.937: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:29.939: INFO: Number of nodes with available pods: 0
Feb  3 21:47:29.939: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:47:31.015: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:31.018: INFO: Number of nodes with available pods: 0
Feb  3 21:47:31.018: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:47:31.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:31.948: INFO: Number of nodes with available pods: 0
Feb  3 21:47:31.948: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:47:32.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:32.948: INFO: Number of nodes with available pods: 0
Feb  3 21:47:32.948: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:47:33.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:33.948: INFO: Number of nodes with available pods: 0
Feb  3 21:47:33.948: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 21:47:34.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:34.946: INFO: Number of nodes with available pods: 2
Feb  3 21:47:34.946: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  3 21:47:34.959: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:34.995: INFO: Number of nodes with available pods: 1
Feb  3 21:47:34.995: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:47:36.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:36.066: INFO: Number of nodes with available pods: 1
Feb  3 21:47:36.066: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:47:37.014: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:37.018: INFO: Number of nodes with available pods: 1
Feb  3 21:47:37.019: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:47:38.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:38.004: INFO: Number of nodes with available pods: 1
Feb  3 21:47:38.004: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:47:38.999: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 21:47:39.014: INFO: Number of nodes with available pods: 2
Feb  3 21:47:39.014: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8198, will wait for the garbage collector to delete the pods
Feb  3 21:47:39.080: INFO: Deleting DaemonSet.extensions daemon-set took: 6.089653ms
Feb  3 21:47:39.480: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.241962ms
Feb  3 21:47:52.183: INFO: Number of nodes with available pods: 0
Feb  3 21:47:52.183: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 21:47:52.186: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8198/daemonsets","resourceVersion":"6396036"},"items":null}

Feb  3 21:47:52.189: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8198/pods","resourceVersion":"6396036"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:52.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8198" for this suite.

• [SLOW TEST:22.473 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":218,"skipped":3762,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:52.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-6d4d2005-1a4f-497a-ae69-d9621b0174c5
STEP: Creating a pod to test consume configMaps
Feb  3 21:47:52.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-664788b9-e665-4722-9b31-37402742c506" in namespace "configmap-831" to be "success or failure"
Feb  3 21:47:52.325: INFO: Pod "pod-configmaps-664788b9-e665-4722-9b31-37402742c506": Phase="Pending", Reason="", readiness=false. Elapsed: 3.735037ms
Feb  3 21:47:54.329: INFO: Pod "pod-configmaps-664788b9-e665-4722-9b31-37402742c506": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00780148s
Feb  3 21:47:56.333: INFO: Pod "pod-configmaps-664788b9-e665-4722-9b31-37402742c506": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011112887s
STEP: Saw pod success
Feb  3 21:47:56.333: INFO: Pod "pod-configmaps-664788b9-e665-4722-9b31-37402742c506" satisfied condition "success or failure"
Feb  3 21:47:56.335: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-664788b9-e665-4722-9b31-37402742c506 container configmap-volume-test: 
STEP: delete the pod
Feb  3 21:47:56.356: INFO: Waiting for pod pod-configmaps-664788b9-e665-4722-9b31-37402742c506 to disappear
Feb  3 21:47:56.361: INFO: Pod pod-configmaps-664788b9-e665-4722-9b31-37402742c506 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:47:56.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-831" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3789,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:47:56.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7116
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-7116
I0203 21:47:56.565024       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7116, replica count: 2
I0203 21:47:59.615513       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:48:02.615722       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 21:48:02.615: INFO: Creating new exec pod
Feb  3 21:48:07.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7116 execpod5vrmz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb  3 21:48:10.760: INFO: stderr: "I0203 21:48:10.679188    3439 log.go:172] (0xc000aee9a0) (0xc000553d60) Create stream\nI0203 21:48:10.679275    3439 log.go:172] (0xc000aee9a0) (0xc000553d60) Stream added, broadcasting: 1\nI0203 21:48:10.682604    3439 log.go:172] (0xc000aee9a0) Reply frame received for 1\nI0203 21:48:10.682635    3439 log.go:172] (0xc000aee9a0) (0xc000553e00) Create stream\nI0203 21:48:10.682645    3439 log.go:172] (0xc000aee9a0) (0xc000553e00) Stream added, broadcasting: 3\nI0203 21:48:10.683465    3439 log.go:172] (0xc000aee9a0) Reply frame received for 3\nI0203 21:48:10.683511    3439 log.go:172] (0xc000aee9a0) (0xc000dda0a0) Create stream\nI0203 21:48:10.683535    3439 log.go:172] (0xc000aee9a0) (0xc000dda0a0) Stream added, broadcasting: 5\nI0203 21:48:10.684774    3439 log.go:172] (0xc000aee9a0) Reply frame received for 5\nI0203 21:48:10.750490    3439 log.go:172] (0xc000aee9a0) Data frame received for 5\nI0203 21:48:10.750528    3439 log.go:172] (0xc000dda0a0) (5) Data frame handling\nI0203 21:48:10.750550    3439 log.go:172] (0xc000dda0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0203 21:48:10.750949    3439 log.go:172] (0xc000aee9a0) Data frame received for 5\nI0203 21:48:10.750975    3439 log.go:172] (0xc000dda0a0) (5) Data frame handling\nI0203 21:48:10.750988    3439 log.go:172] (0xc000dda0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0203 21:48:10.751008    3439 log.go:172] (0xc000aee9a0) Data frame received for 3\nI0203 21:48:10.751029    3439 log.go:172] (0xc000553e00) (3) Data frame handling\nI0203 21:48:10.751062    3439 log.go:172] (0xc000aee9a0) Data frame received for 5\nI0203 21:48:10.751092    3439 log.go:172] (0xc000dda0a0) (5) Data frame handling\nI0203 21:48:10.753123    3439 log.go:172] (0xc000aee9a0) Data frame received for 1\nI0203 21:48:10.753170    3439 log.go:172] (0xc000553d60) (1) Data frame handling\nI0203 21:48:10.753204    3439 log.go:172] (0xc000553d60) (1) Data frame sent\nI0203 21:48:10.753229    3439 log.go:172] (0xc000aee9a0) (0xc000553d60) Stream removed, broadcasting: 1\nI0203 21:48:10.753346    3439 log.go:172] (0xc000aee9a0) Go away received\nI0203 21:48:10.753812    3439 log.go:172] (0xc000aee9a0) (0xc000553d60) Stream removed, broadcasting: 1\nI0203 21:48:10.753833    3439 log.go:172] (0xc000aee9a0) (0xc000553e00) Stream removed, broadcasting: 3\nI0203 21:48:10.753845    3439 log.go:172] (0xc000aee9a0) (0xc000dda0a0) Stream removed, broadcasting: 5\n"
Feb  3 21:48:10.760: INFO: stdout: ""
Feb  3 21:48:10.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7116 execpod5vrmz -- /bin/sh -x -c nc -zv -t -w 2 10.96.86.150 80'
Feb  3 21:48:10.974: INFO: stderr: "I0203 21:48:10.879024    3474 log.go:172] (0xc000a3c580) (0xc000ad6000) Create stream\nI0203 21:48:10.879080    3474 log.go:172] (0xc000a3c580) (0xc000ad6000) Stream added, broadcasting: 1\nI0203 21:48:10.881924    3474 log.go:172] (0xc000a3c580) Reply frame received for 1\nI0203 21:48:10.881973    3474 log.go:172] (0xc000a3c580) (0xc0009be000) Create stream\nI0203 21:48:10.881992    3474 log.go:172] (0xc000a3c580) (0xc0009be000) Stream added, broadcasting: 3\nI0203 21:48:10.882818    3474 log.go:172] (0xc000a3c580) Reply frame received for 3\nI0203 21:48:10.882852    3474 log.go:172] (0xc000a3c580) (0xc0006fda40) Create stream\nI0203 21:48:10.882862    3474 log.go:172] (0xc000a3c580) (0xc0006fda40) Stream added, broadcasting: 5\nI0203 21:48:10.883750    3474 log.go:172] (0xc000a3c580) Reply frame received for 5\nI0203 21:48:10.965935    3474 log.go:172] (0xc000a3c580) Data frame received for 3\nI0203 21:48:10.965983    3474 log.go:172] (0xc0009be000) (3) Data frame handling\nI0203 21:48:10.966004    3474 log.go:172] (0xc000a3c580) Data frame received for 5\nI0203 21:48:10.966016    3474 log.go:172] (0xc0006fda40) (5) Data frame handling\nI0203 21:48:10.966028    3474 log.go:172] (0xc0006fda40) (5) Data frame sent\nI0203 21:48:10.966050    3474 log.go:172] (0xc000a3c580) Data frame received for 5\nI0203 21:48:10.966065    3474 log.go:172] (0xc0006fda40) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.86.150 80\nConnection to 10.96.86.150 80 port [tcp/http] succeeded!\nI0203 21:48:10.967728    3474 log.go:172] (0xc000a3c580) Data frame received for 1\nI0203 21:48:10.967769    3474 log.go:172] (0xc000ad6000) (1) Data frame handling\nI0203 21:48:10.967800    3474 log.go:172] (0xc000ad6000) (1) Data frame sent\nI0203 21:48:10.967826    3474 log.go:172] (0xc000a3c580) (0xc000ad6000) Stream removed, broadcasting: 1\nI0203 21:48:10.967856    3474 log.go:172] (0xc000a3c580) Go away received\nI0203 21:48:10.968304    3474 log.go:172] (0xc000a3c580) (0xc000ad6000) Stream removed, broadcasting: 1\nI0203 21:48:10.968329    3474 log.go:172] (0xc000a3c580) (0xc0009be000) Stream removed, broadcasting: 3\nI0203 21:48:10.968342    3474 log.go:172] (0xc000a3c580) (0xc0006fda40) Stream removed, broadcasting: 5\n"
Feb  3 21:48:10.974: INFO: stdout: ""
Feb  3 21:48:10.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7116 execpod5vrmz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30075'
Feb  3 21:48:11.180: INFO: stderr: "I0203 21:48:11.100901    3494 log.go:172] (0xc0000f7550) (0xc000880640) Create stream\nI0203 21:48:11.100958    3494 log.go:172] (0xc0000f7550) (0xc000880640) Stream added, broadcasting: 1\nI0203 21:48:11.104270    3494 log.go:172] (0xc0000f7550) Reply frame received for 1\nI0203 21:48:11.104301    3494 log.go:172] (0xc0000f7550) (0xc0004fe8c0) Create stream\nI0203 21:48:11.104309    3494 log.go:172] (0xc0000f7550) (0xc0004fe8c0) Stream added, broadcasting: 3\nI0203 21:48:11.105505    3494 log.go:172] (0xc0000f7550) Reply frame received for 3\nI0203 21:48:11.105585    3494 log.go:172] (0xc0000f7550) (0xc000617cc0) Create stream\nI0203 21:48:11.105634    3494 log.go:172] (0xc0000f7550) (0xc000617cc0) Stream added, broadcasting: 5\nI0203 21:48:11.106611    3494 log.go:172] (0xc0000f7550) Reply frame received for 5\nI0203 21:48:11.174559    3494 log.go:172] (0xc0000f7550) Data frame received for 3\nI0203 21:48:11.174586    3494 log.go:172] (0xc0004fe8c0) (3) Data frame handling\nI0203 21:48:11.174609    3494 log.go:172] (0xc0000f7550) Data frame received for 5\nI0203 21:48:11.174614    3494 log.go:172] (0xc000617cc0) (5) Data frame handling\nI0203 21:48:11.174621    3494 log.go:172] (0xc000617cc0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 30075\nConnection to 172.18.0.6 30075 port [tcp/30075] succeeded!\nI0203 21:48:11.174650    3494 log.go:172] (0xc0000f7550) Data frame received for 5\nI0203 21:48:11.174657    3494 log.go:172] (0xc000617cc0) (5) Data frame handling\nI0203 21:48:11.176094    3494 log.go:172] (0xc0000f7550) Data frame received for 1\nI0203 21:48:11.176171    3494 log.go:172] (0xc000880640) (1) Data frame handling\nI0203 21:48:11.176199    3494 log.go:172] (0xc000880640) (1) Data frame sent\nI0203 21:48:11.176261    3494 log.go:172] (0xc0000f7550) (0xc000880640) Stream removed, broadcasting: 1\nI0203 21:48:11.176294    3494 log.go:172] (0xc0000f7550) Go away received\nI0203 21:48:11.176612    3494 log.go:172] (0xc0000f7550) (0xc000880640) Stream removed, broadcasting: 1\nI0203 21:48:11.176624    3494 log.go:172] (0xc0000f7550) (0xc0004fe8c0) Stream removed, broadcasting: 3\nI0203 21:48:11.176629    3494 log.go:172] (0xc0000f7550) (0xc000617cc0) Stream removed, broadcasting: 5\n"
Feb  3 21:48:11.180: INFO: stdout: ""
Feb  3 21:48:11.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7116 execpod5vrmz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 30075'
Feb  3 21:48:11.404: INFO: stderr: "I0203 21:48:11.319333    3514 log.go:172] (0xc0000f62c0) (0xc00081fa40) Create stream\nI0203 21:48:11.319419    3514 log.go:172] (0xc0000f62c0) (0xc00081fa40) Stream added, broadcasting: 1\nI0203 21:48:11.321425    3514 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0203 21:48:11.321470    3514 log.go:172] (0xc0000f62c0) (0xc0005514a0) Create stream\nI0203 21:48:11.321484    3514 log.go:172] (0xc0000f62c0) (0xc0005514a0) Stream added, broadcasting: 3\nI0203 21:48:11.322388    3514 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0203 21:48:11.322422    3514 log.go:172] (0xc0000f62c0) (0xc00090e000) Create stream\nI0203 21:48:11.322435    3514 log.go:172] (0xc0000f62c0) (0xc00090e000) Stream added, broadcasting: 5\nI0203 21:48:11.323229    3514 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0203 21:48:11.396405    3514 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0203 21:48:11.396432    3514 log.go:172] (0xc00090e000) (5) Data frame handling\nI0203 21:48:11.396444    3514 log.go:172] (0xc00090e000) (5) Data frame sent\nI0203 21:48:11.396450    3514 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0203 21:48:11.396454    3514 log.go:172] (0xc00090e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.5 30075\nConnection to 172.18.0.5 30075 port [tcp/30075] succeeded!\nI0203 21:48:11.396522    3514 log.go:172] (0xc00090e000) (5) Data frame sent\nI0203 21:48:11.396713    3514 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0203 21:48:11.396727    3514 log.go:172] (0xc00090e000) (5) Data frame handling\nI0203 21:48:11.397027    3514 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0203 21:48:11.397045    3514 log.go:172] (0xc0005514a0) (3) Data frame handling\nI0203 21:48:11.398227    3514 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0203 21:48:11.398248    3514 log.go:172] (0xc00081fa40) (1) Data frame handling\nI0203 21:48:11.398261    3514 log.go:172] (0xc00081fa40) (1) Data frame sent\nI0203 21:48:11.398274    3514 log.go:172] (0xc0000f62c0) (0xc00081fa40) Stream removed, broadcasting: 1\nI0203 21:48:11.398286    3514 log.go:172] (0xc0000f62c0) Go away received\nI0203 21:48:11.398518    3514 log.go:172] (0xc0000f62c0) (0xc00081fa40) Stream removed, broadcasting: 1\nI0203 21:48:11.398529    3514 log.go:172] (0xc0000f62c0) (0xc0005514a0) Stream removed, broadcasting: 3\nI0203 21:48:11.398535    3514 log.go:172] (0xc0000f62c0) (0xc00090e000) Stream removed, broadcasting: 5\n"
Feb  3 21:48:11.404: INFO: stdout: ""
Feb  3 21:48:11.404: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:11.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7116" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.117 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":220,"skipped":3789,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:11.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 21:48:11.567: INFO: Waiting up to 5m0s for pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53" in namespace "downward-api-1905" to be "success or failure"
Feb  3 21:48:11.583: INFO: Pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53": Phase="Pending", Reason="", readiness=false. Elapsed: 16.128964ms
Feb  3 21:48:13.586: INFO: Pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019384481s
Feb  3 21:48:15.595: INFO: Pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53": Phase="Running", Reason="", readiness=true. Elapsed: 4.02853122s
Feb  3 21:48:17.599: INFO: Pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032316216s
STEP: Saw pod success
Feb  3 21:48:17.599: INFO: Pod "downward-api-01c58176-ee80-4c00-8adc-74c718b30b53" satisfied condition "success or failure"
Feb  3 21:48:17.602: INFO: Trying to get logs from node jerma-worker2 pod downward-api-01c58176-ee80-4c00-8adc-74c718b30b53 container dapi-container: 
STEP: delete the pod
Feb  3 21:48:17.794: INFO: Waiting for pod downward-api-01c58176-ee80-4c00-8adc-74c718b30b53 to disappear
Feb  3 21:48:17.799: INFO: Pod downward-api-01c58176-ee80-4c00-8adc-74c718b30b53 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:17.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1905" for this suite.

• [SLOW TEST:6.320 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3791,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:17.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-c48874a5-0b8b-4def-b860-e77d79ed0a22 in namespace container-probe-1057
Feb  3 21:48:21.924: INFO: Started pod liveness-c48874a5-0b8b-4def-b860-e77d79ed0a22 in namespace container-probe-1057
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 21:48:21.927: INFO: Initial restart count of pod liveness-c48874a5-0b8b-4def-b860-e77d79ed0a22 is 0
Feb  3 21:48:39.965: INFO: Restart count of pod container-probe-1057/liveness-c48874a5-0b8b-4def-b860-e77d79ed0a22 is now 1 (18.038476165s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1057" for this suite.

• [SLOW TEST:22.224 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:40.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-210ce81c-4a58-4222-bd63-e88269f76c83
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:40.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9383" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":223,"skipped":3835,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:40.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-6f453446-29cd-4329-9962-576764fc16ea
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:46.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1140" for this suite.

• [SLOW TEST:6.188 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3852,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:46.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  3 21:48:46.839: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  3 21:48:55.919: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:48:55.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-666" for this suite.

• [SLOW TEST:9.228 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3853,"failed":0}
SS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:48:55.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Feb  3 21:48:56.010: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7972" to be "success or failure"
Feb  3 21:48:56.014: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74102ms
Feb  3 21:48:58.018: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007154866s
Feb  3 21:49:00.022: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011125995s
Feb  3 21:49:02.026: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015365298s
STEP: Saw pod success
Feb  3 21:49:02.026: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  3 21:49:02.030: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  3 21:49:02.077: INFO: Waiting for pod pod-host-path-test to disappear
Feb  3 21:49:02.087: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:02.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7972" for this suite.

• [SLOW TEST:6.168 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3855,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:02.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:49:02.212: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  3 21:49:02.225: INFO: Number of nodes with available pods: 0
Feb  3 21:49:02.225: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  3 21:49:02.292: INFO: Number of nodes with available pods: 0
Feb  3 21:49:02.292: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:03.296: INFO: Number of nodes with available pods: 0
Feb  3 21:49:03.296: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:04.296: INFO: Number of nodes with available pods: 0
Feb  3 21:49:04.296: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:05.344: INFO: Number of nodes with available pods: 1
Feb  3 21:49:05.344: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  3 21:49:05.382: INFO: Number of nodes with available pods: 1
Feb  3 21:49:05.382: INFO: Number of running nodes: 0, number of available pods: 1
Feb  3 21:49:06.386: INFO: Number of nodes with available pods: 0
Feb  3 21:49:06.386: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  3 21:49:06.394: INFO: Number of nodes with available pods: 0
Feb  3 21:49:06.394: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:07.424: INFO: Number of nodes with available pods: 0
Feb  3 21:49:07.424: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:08.399: INFO: Number of nodes with available pods: 0
Feb  3 21:49:08.399: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:09.398: INFO: Number of nodes with available pods: 0
Feb  3 21:49:09.398: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:10.398: INFO: Number of nodes with available pods: 0
Feb  3 21:49:10.398: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:11.398: INFO: Number of nodes with available pods: 0
Feb  3 21:49:11.398: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:12.398: INFO: Number of nodes with available pods: 0
Feb  3 21:49:12.398: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:13.398: INFO: Number of nodes with available pods: 0
Feb  3 21:49:13.398: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:14.408: INFO: Number of nodes with available pods: 0
Feb  3 21:49:14.408: INFO: Node jerma-worker2 is running more than one daemon pod
Feb  3 21:49:15.398: INFO: Number of nodes with available pods: 1
Feb  3 21:49:15.398: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6273, will wait for the garbage collector to delete the pods
Feb  3 21:49:15.462: INFO: Deleting DaemonSet.extensions daemon-set took: 5.758218ms
Feb  3 21:49:15.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.627776ms
Feb  3 21:49:22.166: INFO: Number of nodes with available pods: 0
Feb  3 21:49:22.166: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 21:49:22.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6273/daemonsets","resourceVersion":"6396628"},"items":null}

Feb  3 21:49:22.170: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6273/pods","resourceVersion":"6396628"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:22.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6273" for this suite.

• [SLOW TEST:20.119 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":227,"skipped":3856,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:22.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:29.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4272" for this suite.

• [SLOW TEST:7.123 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":228,"skipped":3885,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:29.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:49:29.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0" in namespace "downward-api-7565" to be "success or failure"
Feb  3 21:49:29.430: INFO: Pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.118127ms
Feb  3 21:49:31.446: INFO: Pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027267428s
Feb  3 21:49:33.450: INFO: Pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.030970192s
Feb  3 21:49:35.455: INFO: Pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035509208s
STEP: Saw pod success
Feb  3 21:49:35.455: INFO: Pod "downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0" satisfied condition "success or failure"
Feb  3 21:49:35.458: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0 container client-container: 
STEP: delete the pod
Feb  3 21:49:35.488: INFO: Waiting for pod downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0 to disappear
Feb  3 21:49:35.496: INFO: Pod downwardapi-volume-45bfa2e2-43c0-4518-9e22-2daca60818a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:35.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7565" for this suite.

• [SLOW TEST:6.162 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:35.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 21:49:35.630: INFO: Waiting up to 5m0s for pod "pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb" in namespace "emptydir-24" to be "success or failure"
Feb  3 21:49:35.642: INFO: Pod "pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.416668ms
Feb  3 21:49:37.646: INFO: Pod "pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01644995s
Feb  3 21:49:39.662: INFO: Pod "pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032133228s
STEP: Saw pod success
Feb  3 21:49:39.662: INFO: Pod "pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb" satisfied condition "success or failure"
Feb  3 21:49:39.665: INFO: Trying to get logs from node jerma-worker pod pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb container test-container: 
STEP: delete the pod
Feb  3 21:49:39.682: INFO: Waiting for pod pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb to disappear
Feb  3 21:49:39.686: INFO: Pod pod-1cced685-b7b5-4bf5-ad4e-eb10dd1be5eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:39.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-24" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3920,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:39.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-b4261b14-73ec-48cd-b5c8-3f08956b2e1b
STEP: Creating a pod to test consume configMaps
Feb  3 21:49:39.970: INFO: Waiting up to 5m0s for pod "pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91" in namespace "configmap-253" to be "success or failure"
Feb  3 21:49:40.087: INFO: Pod "pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91": Phase="Pending", Reason="", readiness=false. Elapsed: 116.966462ms
Feb  3 21:49:42.090: INFO: Pod "pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120668703s
Feb  3 21:49:44.094: INFO: Pod "pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12413652s
STEP: Saw pod success
Feb  3 21:49:44.094: INFO: Pod "pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91" satisfied condition "success or failure"
Feb  3 21:49:44.096: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91 container configmap-volume-test: 
STEP: delete the pod
Feb  3 21:49:44.112: INFO: Waiting for pod pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91 to disappear
Feb  3 21:49:44.128: INFO: Pod pod-configmaps-9179e4cf-3020-47bb-ab3c-f3afb7110f91 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:44.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-253" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3924,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:44.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:49:44.220: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:45.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2307" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":232,"skipped":3936,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:45.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:49:45.695: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824" in namespace "security-context-test-4707" to be "success or failure"
Feb  3 21:49:45.699: INFO: Pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824": Phase="Pending", Reason="", readiness=false. Elapsed: 3.856482ms
Feb  3 21:49:47.703: INFO: Pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908396s
Feb  3 21:49:49.707: INFO: Pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011731169s
Feb  3 21:49:49.707: INFO: Pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824" satisfied condition "success or failure"
Feb  3 21:49:49.713: INFO: Got logs for pod "busybox-privileged-false-62226299-c588-4ae1-947b-c620ad4bf824": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:49:49.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4707" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3952,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:49:49.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4804
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 21:49:49.855: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 21:50:13.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.104:8080/dial?request=hostname&protocol=http&host=10.244.2.103&port=8080&tries=1'] Namespace:pod-network-test-4804 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:50:13.989: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:50:14.033397       6 log.go:172] (0xc002a4f3f0) (0xc001ff1360) Create stream
I0203 21:50:14.033424       6 log.go:172] (0xc002a4f3f0) (0xc001ff1360) Stream added, broadcasting: 1
I0203 21:50:14.035604       6 log.go:172] (0xc002a4f3f0) Reply frame received for 1
I0203 21:50:14.035650       6 log.go:172] (0xc002a4f3f0) (0xc001ff1400) Create stream
I0203 21:50:14.035660       6 log.go:172] (0xc002a4f3f0) (0xc001ff1400) Stream added, broadcasting: 3
I0203 21:50:14.036653       6 log.go:172] (0xc002a4f3f0) Reply frame received for 3
I0203 21:50:14.036686       6 log.go:172] (0xc002a4f3f0) (0xc001ff1720) Create stream
I0203 21:50:14.036701       6 log.go:172] (0xc002a4f3f0) (0xc001ff1720) Stream added, broadcasting: 5
I0203 21:50:14.037688       6 log.go:172] (0xc002a4f3f0) Reply frame received for 5
I0203 21:50:14.136622       6 log.go:172] (0xc002a4f3f0) Data frame received for 3
I0203 21:50:14.136655       6 log.go:172] (0xc001ff1400) (3) Data frame handling
I0203 21:50:14.136684       6 log.go:172] (0xc001ff1400) (3) Data frame sent
I0203 21:50:14.137291       6 log.go:172] (0xc002a4f3f0) Data frame received for 5
I0203 21:50:14.137331       6 log.go:172] (0xc001ff1720) (5) Data frame handling
I0203 21:50:14.137392       6 log.go:172] (0xc002a4f3f0) Data frame received for 3
I0203 21:50:14.137442       6 log.go:172] (0xc001ff1400) (3) Data frame handling
I0203 21:50:14.139049       6 log.go:172] (0xc002a4f3f0) Data frame received for 1
I0203 21:50:14.139096       6 log.go:172] (0xc001ff1360) (1) Data frame handling
I0203 21:50:14.139179       6 log.go:172] (0xc001ff1360) (1) Data frame sent
I0203 21:50:14.139200       6 log.go:172] (0xc002a4f3f0) (0xc001ff1360) Stream removed, broadcasting: 1
I0203 21:50:14.139216       6 log.go:172] (0xc002a4f3f0) Go away received
I0203 21:50:14.139357       6 log.go:172] (0xc002a4f3f0) (0xc001ff1360) Stream removed, broadcasting: 1
I0203 21:50:14.139384       6 log.go:172] (0xc002a4f3f0) (0xc001ff1400) Stream removed, broadcasting: 3
I0203 21:50:14.139403       6 log.go:172] (0xc002a4f3f0) (0xc001ff1720) Stream removed, broadcasting: 5
Feb  3 21:50:14.139: INFO: Waiting for responses: map[]
Feb  3 21:50:14.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.104:8080/dial?request=hostname&protocol=http&host=10.244.1.109&port=8080&tries=1'] Namespace:pod-network-test-4804 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 21:50:14.142: INFO: >>> kubeConfig: /root/.kube/config
I0203 21:50:14.173884       6 log.go:172] (0xc002a4f970) (0xc001ff1d60) Create stream
I0203 21:50:14.173918       6 log.go:172] (0xc002a4f970) (0xc001ff1d60) Stream added, broadcasting: 1
I0203 21:50:14.180059       6 log.go:172] (0xc002a4f970) Reply frame received for 1
I0203 21:50:14.180120       6 log.go:172] (0xc002a4f970) (0xc0020bdf40) Create stream
I0203 21:50:14.180145       6 log.go:172] (0xc002a4f970) (0xc0020bdf40) Stream added, broadcasting: 3
I0203 21:50:14.181261       6 log.go:172] (0xc002a4f970) Reply frame received for 3
I0203 21:50:14.181290       6 log.go:172] (0xc002a4f970) (0xc00018a460) Create stream
I0203 21:50:14.181304       6 log.go:172] (0xc002a4f970) (0xc00018a460) Stream added, broadcasting: 5
I0203 21:50:14.182214       6 log.go:172] (0xc002a4f970) Reply frame received for 5
I0203 21:50:14.259605       6 log.go:172] (0xc002a4f970) Data frame received for 3
I0203 21:50:14.259629       6 log.go:172] (0xc0020bdf40) (3) Data frame handling
I0203 21:50:14.259647       6 log.go:172] (0xc0020bdf40) (3) Data frame sent
I0203 21:50:14.260556       6 log.go:172] (0xc002a4f970) Data frame received for 5
I0203 21:50:14.260579       6 log.go:172] (0xc00018a460) (5) Data frame handling
I0203 21:50:14.260602       6 log.go:172] (0xc002a4f970) Data frame received for 3
I0203 21:50:14.260634       6 log.go:172] (0xc0020bdf40) (3) Data frame handling
I0203 21:50:14.262409       6 log.go:172] (0xc002a4f970) Data frame received for 1
I0203 21:50:14.262426       6 log.go:172] (0xc001ff1d60) (1) Data frame handling
I0203 21:50:14.262442       6 log.go:172] (0xc001ff1d60) (1) Data frame sent
I0203 21:50:14.262520       6 log.go:172] (0xc002a4f970) (0xc001ff1d60) Stream removed, broadcasting: 1
I0203 21:50:14.262630       6 log.go:172] (0xc002a4f970) (0xc001ff1d60) Stream removed, broadcasting: 1
I0203 21:50:14.262645       6 log.go:172] (0xc002a4f970) (0xc0020bdf40) Stream removed, broadcasting: 3
I0203 21:50:14.262752       6 log.go:172] (0xc002a4f970) Go away received
I0203 21:50:14.262927       6 log.go:172] (0xc002a4f970) (0xc00018a460) Stream removed, broadcasting: 5
Feb  3 21:50:14.262: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:50:14.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4804" for this suite.

• [SLOW TEST:24.549 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3958,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:50:14.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-4862aa7c-697d-41cb-bf77-ed08869ad334 in namespace container-probe-7039
Feb  3 21:50:18.433: INFO: Started pod busybox-4862aa7c-697d-41cb-bf77-ed08869ad334 in namespace container-probe-7039
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 21:50:18.437: INFO: Initial restart count of pod busybox-4862aa7c-697d-41cb-bf77-ed08869ad334 is 0
Feb  3 21:51:08.555: INFO: Restart count of pod container-probe-7039/busybox-4862aa7c-697d-41cb-bf77-ed08869ad334 is now 1 (50.118675178s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:08.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7039" for this suite.

• [SLOW TEST:54.321 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3967,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:08.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:51:08.748: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: 
alternatives.log
containers/

[identical alternatives.log / containers/ listings were returned for the remaining proxy iterations, (1) through (19); the source log is truncated at this point, cutting off the rest of this Proxy test, its PASSED record, and the header of the ResourceQuota test that follows]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:19.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-854" for this suite.

• [SLOW TEST:11.121 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":237,"skipped":4040,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:19.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:51:20.853: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:51:22.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:51:24.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985880, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:51:27.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:51:27.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8654-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:29.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5322" for this suite.
STEP: Destroying namespace "webhook-5322-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.409 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":238,"skipped":4043,"failed":0}
SSSSSSSSSSSSSS
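The stored-version test registers a mutating webhook that matches both served versions of the custom resource, so the mutation still applies after storage flips from v1 to v2. A sketch of that registration using admissionregistration/v1 types; the group, resource plural, handler path, and service name are hypothetical, modeled loosely on the log:

package main

import (
	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerCRMutator matches CREATE and UPDATE for both served versions
// of the custom resource, so the mutation keeps applying after the
// storage version is patched from v1 to v2.
func registerCRMutator(c kubernetes.Interface, ns string, caBundle []byte) error {
	path := "/mutating-custom-resource" // hypothetical handler path
	sideEffects := admv1.SideEffectClassNone
	policy := admv1.Fail
	cfg := &admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "cr-mutator"},
		Webhooks: []admv1.MutatingWebhook{{
			Name: "cr-mutator.webhook.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service:  &admv1.ServiceReference{Namespace: ns, Name: "e2e-test-webhook", Path: &path},
				CABundle: caBundle,
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create, admv1.Update},
				Rule: admv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"}, // both served versions
					Resources:   []string{"testcrds"}, // hypothetical plural
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &policy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := c.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(cfg)
	return err
}
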
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:29.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  3 21:51:34.120: INFO: Successfully updated pod "annotationupdate5002ee9a-dca1-4251-a7c3-5fc4eaef4b5f"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:36.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8050" for this suite.

• [SLOW TEST:6.800 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4057,"failed":0}
SSS
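The annotation-update test relies on a projected downwardAPI volume: the kubelet rewrites the projected file when pod metadata changes, which is what makes "Successfully updated pod" observable from inside the container. A minimal sketch of such a volume (volume and file names hypothetical):

package main

import corev1 "k8s.io/api/core/v1"

// annotationsVolume projects the pod's own annotations into a file;
// the kubelet refreshes that file when the annotations are modified.
func annotationsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
}
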
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:36.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-437e0c2c-f843-4043-9a08-aa1dca029fec
STEP: Creating a pod to test consume secrets
Feb  3 21:51:36.283: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee" in namespace "projected-6757" to be "success or failure"
Feb  3 21:51:36.297: INFO: Pod "pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee": Phase="Pending", Reason="", readiness=false. Elapsed: 13.91255ms
Feb  3 21:51:38.308: INFO: Pod "pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024294274s
Feb  3 21:51:40.326: INFO: Pod "pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042552387s
STEP: Saw pod success
Feb  3 21:51:40.326: INFO: Pod "pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee" satisfied condition "success or failure"
Feb  3 21:51:40.352: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 21:51:40.375: INFO: Waiting for pod pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee to disappear
Feb  3 21:51:40.395: INFO: Pod pod-projected-secrets-67fd7d2d-ae36-488e-a96d-8006e54a4eee no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:40.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6757" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4060,"failed":0}
SSSSSSSS
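The defaultMode in this test sits on the projected volume and applies to every file the secret projection writes. A sketch, with the secret name shortened from the log:

package main

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithMode projects a secret with an explicit DefaultMode;
// every projected key is created with file mode 0400.
func secretVolumeWithMode() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
}
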
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:40.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:51:40.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b" in namespace "downward-api-569" to be "success or failure"
Feb  3 21:51:40.487: INFO: Pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672895ms
Feb  3 21:51:42.599: INFO: Pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117865364s
Feb  3 21:51:44.602: INFO: Pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b": Phase="Running", Reason="", readiness=true. Elapsed: 4.121601557s
Feb  3 21:51:46.610: INFO: Pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129544117s
STEP: Saw pod success
Feb  3 21:51:46.610: INFO: Pod "downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b" satisfied condition "success or failure"
Feb  3 21:51:46.613: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b container client-container: 
STEP: delete the pod
Feb  3 21:51:46.630: INFO: Waiting for pod downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b to disappear
Feb  3 21:51:46.659: INFO: Pod downwardapi-volume-e2c8c713-2d06-4f9a-9d90-64793303334b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:51:46.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-569" for this suite.

• [SLOW TEST:6.264 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4068,"failed":0}
SSSSSSSSSSSS
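The "success or failure" wait that recurs throughout this log is a simple phase poll. A sketch of the pattern, assuming v1.17-era client-go and apimachinery's wait package:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitSuccessOrFailure polls the pod phase until it is terminal,
// mirroring the framework's "success or failure" condition.
func waitSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil // still Pending or Running; keep polling
	})
}
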
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:51:46.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-154
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-154
STEP: creating replication controller externalsvc in namespace services-154
I0203 21:51:46.916502       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-154, replica count: 2
I0203 21:51:49.966916       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:51:52.967283       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb  3 21:51:53.003: INFO: Creating new exec pod
Feb  3 21:51:57.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-154 execpod6f2tn -- /bin/sh -x -c nslookup clusterip-service'
Feb  3 21:51:57.446: INFO: stderr: "I0203 21:51:57.328172    3535 log.go:172] (0xc0000f8a50) (0xc00062bae0) Create stream\nI0203 21:51:57.328248    3535 log.go:172] (0xc0000f8a50) (0xc00062bae0) Stream added, broadcasting: 1\nI0203 21:51:57.330561    3535 log.go:172] (0xc0000f8a50) Reply frame received for 1\nI0203 21:51:57.330621    3535 log.go:172] (0xc0000f8a50) (0xc00002c000) Create stream\nI0203 21:51:57.330641    3535 log.go:172] (0xc0000f8a50) (0xc00002c000) Stream added, broadcasting: 3\nI0203 21:51:57.331434    3535 log.go:172] (0xc0000f8a50) Reply frame received for 3\nI0203 21:51:57.331473    3535 log.go:172] (0xc0000f8a50) (0xc00062bd60) Create stream\nI0203 21:51:57.331485    3535 log.go:172] (0xc0000f8a50) (0xc00062bd60) Stream added, broadcasting: 5\nI0203 21:51:57.332221    3535 log.go:172] (0xc0000f8a50) Reply frame received for 5\nI0203 21:51:57.425382    3535 log.go:172] (0xc0000f8a50) Data frame received for 5\nI0203 21:51:57.425413    3535 log.go:172] (0xc00062bd60) (5) Data frame handling\nI0203 21:51:57.425434    3535 log.go:172] (0xc00062bd60) (5) Data frame sent\n+ nslookup clusterip-service\nI0203 21:51:57.435994    3535 log.go:172] (0xc0000f8a50) Data frame received for 3\nI0203 21:51:57.436031    3535 log.go:172] (0xc00002c000) (3) Data frame handling\nI0203 21:51:57.436064    3535 log.go:172] (0xc00002c000) (3) Data frame sent\nI0203 21:51:57.437323    3535 log.go:172] (0xc0000f8a50) Data frame received for 3\nI0203 21:51:57.437362    3535 log.go:172] (0xc00002c000) (3) Data frame handling\nI0203 21:51:57.437381    3535 log.go:172] (0xc00002c000) (3) Data frame sent\nI0203 21:51:57.437898    3535 log.go:172] (0xc0000f8a50) Data frame received for 3\nI0203 21:51:57.437923    3535 log.go:172] (0xc00002c000) (3) Data frame handling\nI0203 21:51:57.438015    3535 log.go:172] (0xc0000f8a50) Data frame received for 5\nI0203 21:51:57.438039    3535 log.go:172] (0xc00062bd60) (5) Data frame handling\nI0203 21:51:57.440074    3535 log.go:172] (0xc0000f8a50) Data frame received for 1\nI0203 21:51:57.440095    3535 log.go:172] (0xc00062bae0) (1) Data frame handling\nI0203 21:51:57.440116    3535 log.go:172] (0xc00062bae0) (1) Data frame sent\nI0203 21:51:57.440231    3535 log.go:172] (0xc0000f8a50) (0xc00062bae0) Stream removed, broadcasting: 1\nI0203 21:51:57.440281    3535 log.go:172] (0xc0000f8a50) Go away received\nI0203 21:51:57.440544    3535 log.go:172] (0xc0000f8a50) (0xc00062bae0) Stream removed, broadcasting: 1\nI0203 21:51:57.440564    3535 log.go:172] (0xc0000f8a50) (0xc00002c000) Stream removed, broadcasting: 3\nI0203 21:51:57.440574    3535 log.go:172] (0xc0000f8a50) (0xc00062bd60) Stream removed, broadcasting: 5\n"
Feb  3 21:51:57.446: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-154.svc.cluster.local\tcanonical name = externalsvc.services-154.svc.cluster.local.\nName:\texternalsvc.services-154.svc.cluster.local\nAddress: 10.96.150.237\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-154, will wait for the garbage collector to delete the pods
Feb  3 21:51:57.517: INFO: Deleting ReplicationController externalsvc took: 4.372247ms
Feb  3 21:51:57.918: INFO: Terminating ReplicationController externalsvc pods took: 400.293446ms
Feb  3 21:52:02.339: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:02.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-154" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:15.757 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":242,"skipped":4080,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
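The type change above amounts to updating the Service in place: switch Spec.Type, set Spec.ExternalName, and drop the allocated ClusterIP. A sketch under the same v1.17-era client assumption (the target FQDN shown is the one from this run):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName converts an existing ClusterIP service in place;
// afterwards DNS resolves the service name to a CNAME on the target,
// which is what the test's nslookup verifies.
func toExternalName(c kubernetes.Interface, ns, name, target string) error {
	svc, err := c.CoreV1().Services(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. "externalsvc.services-154.svc.cluster.local"
	svc.Spec.ClusterIP = ""        // ExternalName services carry no cluster IP
	svc.Spec.Ports = nil
	_, err = c.CoreV1().Services(ns).Update(svc)
	return err
}
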
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:02.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  3 21:52:07.037: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-258 pod-service-account-178b72d9-4ff9-49fd-b0dd-c366597a187b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  3 21:52:07.306: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-258 pod-service-account-178b72d9-4ff9-49fd-b0dd-c366597a187b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  3 21:52:07.511: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-258 pod-service-account-178b72d9-4ff9-49fd-b0dd-c366597a187b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:07.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-258" for this suite.

• [SLOW TEST:5.368 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":243,"skipped":4105,"failed":0}
SSSSSSSSSSSSSSSSSSSS
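The three kubectl exec calls above simply read the files the kubelet mounts for the pod's ServiceAccount. The same reads from inside the container, as a sketch:

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// saDir is the fixed mount point the kubelet uses for the pod's
// ServiceAccount credentials.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// readServiceAccount reads the same three files the test cats.
func readServiceAccount() error {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := ioutil.ReadFile(filepath.Join(saDir, f))
		if err != nil {
			return err
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
	return nil
}
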
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:07.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  3 21:52:07.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2459'
Feb  3 21:52:07.991: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 21:52:07.992: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb  3 21:52:08.055: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-fh7j7]
Feb  3 21:52:08.055: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-fh7j7" in namespace "kubectl-2459" to be "running and ready"
Feb  3 21:52:08.057: INFO: Pod "e2e-test-httpd-rc-fh7j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37212ms
Feb  3 21:52:10.061: INFO: Pod "e2e-test-httpd-rc-fh7j7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005760103s
Feb  3 21:52:12.065: INFO: Pod "e2e-test-httpd-rc-fh7j7": Phase="Running", Reason="", readiness=true. Elapsed: 4.009915558s
Feb  3 21:52:12.065: INFO: Pod "e2e-test-httpd-rc-fh7j7" satisfied condition "running and ready"
Feb  3 21:52:12.065: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-fh7j7]
Feb  3 21:52:12.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2459'
Feb  3 21:52:12.191: INFO: stderr: ""
Feb  3 21:52:12.191: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.114. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.114. Set the 'ServerName' directive globally to suppress this message\n[Wed Feb 03 21:52:10.950586 2021] [mpm_event:notice] [pid 1:tid 140206117579624] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Feb 03 21:52:10.950653 2021] [core:notice] [pid 1:tid 140206117579624] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Feb  3 21:52:12.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2459'
Feb  3 21:52:12.291: INFO: stderr: ""
Feb  3 21:52:12.291: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2459" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":244,"skipped":4125,"failed":0}
SSSSSSSSSSSSSSSSSSS
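The deprecated --generator=run/v1 expands to a plain ReplicationController. A sketch of the equivalent object built directly, assuming the run=<name> label scheme kubectl run applied:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// httpdRC builds the ReplicationController the deprecated generator
// produced: one replica, run=<name> labels, and the given image.
func httpdRC(name string) *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"run": name}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}
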
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:12.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ad66353a-e9f4-40ff-a20f-4e339992954e
STEP: Creating a pod to test consume secrets
Feb  3 21:52:12.526: INFO: Waiting up to 5m0s for pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2" in namespace "secrets-5016" to be "success or failure"
Feb  3 21:52:12.543: INFO: Pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.811822ms
Feb  3 21:52:14.547: INFO: Pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020872292s
Feb  3 21:52:16.636: INFO: Pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109596657s
Feb  3 21:52:18.640: INFO: Pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113781307s
STEP: Saw pod success
Feb  3 21:52:18.640: INFO: Pod "pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2" satisfied condition "success or failure"
Feb  3 21:52:18.643: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2 container secret-volume-test: 
STEP: delete the pod
Feb  3 21:52:18.689: INFO: Waiting for pod pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2 to disappear
Feb  3 21:52:18.698: INFO: Pod pod-secrets-1cf099a6-920a-4743-870a-4bcf07c1e0f2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:18.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5016" for this suite.
STEP: Destroying namespace "secret-namespace-5322" for this suite.

• [SLOW TEST:6.413 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4144,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:18.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:52:19.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:52:21.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:52:23.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985939, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:52:26.366: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:52:26.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2231-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:27.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5670" for this suite.
STEP: Destroying namespace "webhook-5670-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.023 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":246,"skipped":4144,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:27.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:52:27.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1" in namespace "projected-6706" to be "success or failure"
Feb  3 21:52:28.226: INFO: Pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1": Phase="Pending", Reason="", readiness=false. Elapsed: 332.993244ms
Feb  3 21:52:30.234: INFO: Pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341064829s
Feb  3 21:52:32.238: INFO: Pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.34555597s
Feb  3 21:52:34.242: INFO: Pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.349611238s
STEP: Saw pod success
Feb  3 21:52:34.242: INFO: Pod "downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1" satisfied condition "success or failure"
Feb  3 21:52:34.246: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1 container client-container: 
STEP: delete the pod
Feb  3 21:52:34.270: INFO: Waiting for pod downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1 to disappear
Feb  3 21:52:34.273: INFO: Pod downwardapi-volume-119d4f81-8a6e-4b06-8eb7-6889da1344f1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:34.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6706" for this suite.

• [SLOW TEST:6.547 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
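The cpu-request variant of the downward API volume uses a resourceFieldRef with a divisor: with divisor 1m, a 100m request is written to the file as "100". A sketch of the volume item (container name taken from the log, path hypothetical):

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestFile exposes the container's CPU request through the
// downward API, scaled by the divisor.
func cpuRequestFile() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "cpu_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.cpu",
			Divisor:       resource.MustParse("1m"),
		},
	}
}
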
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:34.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:52:34.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:52:36.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985954, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985954, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985954, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747985954, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:52:39.984: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:52:52.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4254" for this suite.
STEP: Destroying namespace "webhook-4254-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.995 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":248,"skipped":4189,"failed":0}
SSSSSSSSSSSSSSSSSSS
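The timeout test toggles exactly two webhook fields: TimeoutSeconds (1s, shorter than the test server's 5s sleep; v1 defaults to 10s when unset) and FailurePolicy, which decides whether the timed-out request is rejected or allowed through. A sketch of the Ignore case:

package main

import admv1 "k8s.io/api/admissionregistration/v1"

// slowWebhook sets a 1s deadline and the Ignore policy, under which a
// timed-out webhook call is tolerated and the request proceeds; with
// Fail instead, the same timeout rejects the request.
func slowWebhook(base admv1.MutatingWebhook) admv1.MutatingWebhook {
	timeout := int32(1) // v1 defaults to 10s when left unset
	policy := admv1.Ignore
	base.TimeoutSeconds = &timeout
	base.FailurePolicy = &policy
	return base
}
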
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:52:52.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3746
STEP: creating replication controller nodeport-test in namespace services-3746
I0203 21:52:52.502587       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3746, replica count: 2
I0203 21:52:55.553005       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 21:52:58.553355       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 21:52:58.553: INFO: Creating new exec pod
Feb  3 21:53:03.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodb5p2q -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb  3 21:53:03.826: INFO: stderr: "I0203 21:53:03.714487    3675 log.go:172] (0xc0004400b0) (0xc00080fb80) Create stream\nI0203 21:53:03.714526    3675 log.go:172] (0xc0004400b0) (0xc00080fb80) Stream added, broadcasting: 1\nI0203 21:53:03.716380    3675 log.go:172] (0xc0004400b0) Reply frame received for 1\nI0203 21:53:03.716423    3675 log.go:172] (0xc0004400b0) (0xc0008e0000) Create stream\nI0203 21:53:03.716438    3675 log.go:172] (0xc0004400b0) (0xc0008e0000) Stream added, broadcasting: 3\nI0203 21:53:03.717460    3675 log.go:172] (0xc0004400b0) Reply frame received for 3\nI0203 21:53:03.717495    3675 log.go:172] (0xc0004400b0) (0xc0008e00a0) Create stream\nI0203 21:53:03.717509    3675 log.go:172] (0xc0004400b0) (0xc0008e00a0) Stream added, broadcasting: 5\nI0203 21:53:03.718326    3675 log.go:172] (0xc0004400b0) Reply frame received for 5\nI0203 21:53:03.818049    3675 log.go:172] (0xc0004400b0) Data frame received for 5\nI0203 21:53:03.818079    3675 log.go:172] (0xc0008e00a0) (5) Data frame handling\nI0203 21:53:03.818107    3675 log.go:172] (0xc0008e00a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0203 21:53:03.818449    3675 log.go:172] (0xc0004400b0) Data frame received for 5\nI0203 21:53:03.818487    3675 log.go:172] (0xc0008e00a0) (5) Data frame handling\nI0203 21:53:03.818512    3675 log.go:172] (0xc0008e00a0) (5) Data frame sent\nI0203 21:53:03.818532    3675 log.go:172] (0xc0004400b0) Data frame received for 5\nI0203 21:53:03.818555    3675 log.go:172] (0xc0008e00a0) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0203 21:53:03.818615    3675 log.go:172] (0xc0004400b0) Data frame received for 3\nI0203 21:53:03.818633    3675 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0203 21:53:03.820616    3675 log.go:172] (0xc0004400b0) Data frame received for 1\nI0203 21:53:03.820639    3675 log.go:172] (0xc00080fb80) (1) Data frame handling\nI0203 21:53:03.820661    3675 log.go:172] (0xc00080fb80) (1) Data frame sent\nI0203 21:53:03.820690    3675 log.go:172] (0xc0004400b0) (0xc00080fb80) Stream removed, broadcasting: 1\nI0203 21:53:03.821133    3675 log.go:172] (0xc0004400b0) Go away received\nI0203 21:53:03.821302    3675 log.go:172] (0xc0004400b0) (0xc00080fb80) Stream removed, broadcasting: 1\nI0203 21:53:03.821325    3675 log.go:172] (0xc0004400b0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0203 21:53:03.821335    3675 log.go:172] (0xc0004400b0) (0xc0008e00a0) Stream removed, broadcasting: 5\n"
Feb  3 21:53:03.826: INFO: stdout: ""
Feb  3 21:53:03.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodb5p2q -- /bin/sh -x -c nc -zv -t -w 2 10.96.251.182 80'
Feb  3 21:53:04.038: INFO: stderr: "I0203 21:53:03.957369    3699 log.go:172] (0xc000580e70) (0xc000938140) Create stream\nI0203 21:53:03.957422    3699 log.go:172] (0xc000580e70) (0xc000938140) Stream added, broadcasting: 1\nI0203 21:53:03.959797    3699 log.go:172] (0xc000580e70) Reply frame received for 1\nI0203 21:53:03.959835    3699 log.go:172] (0xc000580e70) (0xc000938280) Create stream\nI0203 21:53:03.959847    3699 log.go:172] (0xc000580e70) (0xc000938280) Stream added, broadcasting: 3\nI0203 21:53:03.960676    3699 log.go:172] (0xc000580e70) Reply frame received for 3\nI0203 21:53:03.960730    3699 log.go:172] (0xc000580e70) (0xc000720000) Create stream\nI0203 21:53:03.960759    3699 log.go:172] (0xc000580e70) (0xc000720000) Stream added, broadcasting: 5\nI0203 21:53:03.961753    3699 log.go:172] (0xc000580e70) Reply frame received for 5\nI0203 21:53:04.031674    3699 log.go:172] (0xc000580e70) Data frame received for 5\nI0203 21:53:04.031730    3699 log.go:172] (0xc000720000) (5) Data frame handling\nI0203 21:53:04.031761    3699 log.go:172] (0xc000720000) (5) Data frame sent\nI0203 21:53:04.031776    3699 log.go:172] (0xc000580e70) Data frame received for 5\n+ nc -zv -t -w 2 10.96.251.182 80\nI0203 21:53:04.031803    3699 log.go:172] (0xc000580e70) Data frame received for 3\nI0203 21:53:04.031846    3699 log.go:172] (0xc000938280) (3) Data frame handling\nI0203 21:53:04.031886    3699 log.go:172] (0xc000720000) (5) Data frame handling\nI0203 21:53:04.031925    3699 log.go:172] (0xc000720000) (5) Data frame sent\nI0203 21:53:04.031941    3699 log.go:172] (0xc000580e70) Data frame received for 5\nI0203 21:53:04.031953    3699 log.go:172] (0xc000720000) (5) Data frame handling\nConnection to 10.96.251.182 80 port [tcp/http] succeeded!\nI0203 21:53:04.033168    3699 log.go:172] (0xc000580e70) Data frame received for 1\nI0203 21:53:04.033189    3699 log.go:172] (0xc000938140) (1) Data frame handling\nI0203 21:53:04.033201    3699 log.go:172] (0xc000938140) (1) Data frame sent\nI0203 21:53:04.033211    3699 log.go:172] (0xc000580e70) (0xc000938140) Stream removed, broadcasting: 1\nI0203 21:53:04.033290    3699 log.go:172] (0xc000580e70) Go away received\nI0203 21:53:04.033485    3699 log.go:172] (0xc000580e70) (0xc000938140) Stream removed, broadcasting: 1\nI0203 21:53:04.033496    3699 log.go:172] (0xc000580e70) (0xc000938280) Stream removed, broadcasting: 3\nI0203 21:53:04.033503    3699 log.go:172] (0xc000580e70) (0xc000720000) Stream removed, broadcasting: 5\n"
Feb  3 21:53:04.038: INFO: stdout: ""
Feb  3 21:53:04.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodb5p2q -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31105'
Feb  3 21:53:04.244: INFO: stderr: "I0203 21:53:04.163432    3721 log.go:172] (0xc0000f6dc0) (0xc000701b80) Create stream\nI0203 21:53:04.163471    3721 log.go:172] (0xc0000f6dc0) (0xc000701b80) Stream added, broadcasting: 1\nI0203 21:53:04.166325    3721 log.go:172] (0xc0000f6dc0) Reply frame received for 1\nI0203 21:53:04.166380    3721 log.go:172] (0xc0000f6dc0) (0xc000990000) Create stream\nI0203 21:53:04.166403    3721 log.go:172] (0xc0000f6dc0) (0xc000990000) Stream added, broadcasting: 3\nI0203 21:53:04.167513    3721 log.go:172] (0xc0000f6dc0) Reply frame received for 3\nI0203 21:53:04.167555    3721 log.go:172] (0xc0000f6dc0) (0xc0009da000) Create stream\nI0203 21:53:04.167572    3721 log.go:172] (0xc0000f6dc0) (0xc0009da000) Stream added, broadcasting: 5\nI0203 21:53:04.168466    3721 log.go:172] (0xc0000f6dc0) Reply frame received for 5\nI0203 21:53:04.236760    3721 log.go:172] (0xc0000f6dc0) Data frame received for 5\nI0203 21:53:04.236800    3721 log.go:172] (0xc0009da000) (5) Data frame handling\nI0203 21:53:04.236827    3721 log.go:172] (0xc0009da000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31105\nConnection to 172.18.0.6 31105 port [tcp/31105] succeeded!\nI0203 21:53:04.236923    3721 log.go:172] (0xc0000f6dc0) Data frame received for 5\nI0203 21:53:04.236961    3721 log.go:172] (0xc0009da000) (5) Data frame handling\nI0203 21:53:04.237126    3721 log.go:172] (0xc0000f6dc0) Data frame received for 3\nI0203 21:53:04.237153    3721 log.go:172] (0xc000990000) (3) Data frame handling\nI0203 21:53:04.240335    3721 log.go:172] (0xc0000f6dc0) Data frame received for 1\nI0203 21:53:04.240368    3721 log.go:172] (0xc000701b80) (1) Data frame handling\nI0203 21:53:04.240386    3721 log.go:172] (0xc000701b80) (1) Data frame sent\nI0203 21:53:04.240407    3721 log.go:172] (0xc0000f6dc0) (0xc000701b80) Stream removed, broadcasting: 1\nI0203 21:53:04.240431    3721 log.go:172] (0xc0000f6dc0) Go away received\nI0203 21:53:04.240682    3721 log.go:172] (0xc0000f6dc0) (0xc000701b80) Stream removed, broadcasting: 1\nI0203 21:53:04.240700    3721 log.go:172] (0xc0000f6dc0) (0xc000990000) Stream removed, broadcasting: 3\nI0203 21:53:04.240708    3721 log.go:172] (0xc0000f6dc0) (0xc0009da000) Stream removed, broadcasting: 5\n"
Feb  3 21:53:04.244: INFO: stdout: ""
Feb  3 21:53:04.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3746 execpodb5p2q -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31105'
Feb  3 21:53:04.450: INFO: stderr: "I0203 21:53:04.373363    3744 log.go:172] (0xc000105130) (0xc0008f2000) Create stream\nI0203 21:53:04.373431    3744 log.go:172] (0xc000105130) (0xc0008f2000) Stream added, broadcasting: 1\nI0203 21:53:04.375960    3744 log.go:172] (0xc000105130) Reply frame received for 1\nI0203 21:53:04.376024    3744 log.go:172] (0xc000105130) (0xc0008f20a0) Create stream\nI0203 21:53:04.376052    3744 log.go:172] (0xc000105130) (0xc0008f20a0) Stream added, broadcasting: 3\nI0203 21:53:04.377145    3744 log.go:172] (0xc000105130) Reply frame received for 3\nI0203 21:53:04.377187    3744 log.go:172] (0xc000105130) (0xc0006adae0) Create stream\nI0203 21:53:04.377199    3744 log.go:172] (0xc000105130) (0xc0006adae0) Stream added, broadcasting: 5\nI0203 21:53:04.377922    3744 log.go:172] (0xc000105130) Reply frame received for 5\nI0203 21:53:04.442912    3744 log.go:172] (0xc000105130) Data frame received for 3\nI0203 21:53:04.442960    3744 log.go:172] (0xc0008f20a0) (3) Data frame handling\nI0203 21:53:04.442993    3744 log.go:172] (0xc000105130) Data frame received for 5\nI0203 21:53:04.443024    3744 log.go:172] (0xc0006adae0) (5) Data frame handling\nI0203 21:53:04.443039    3744 log.go:172] (0xc0006adae0) (5) Data frame sent\nI0203 21:53:04.443051    3744 log.go:172] (0xc000105130) Data frame received for 5\nI0203 21:53:04.443062    3744 log.go:172] (0xc0006adae0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.5 31105\nConnection to 172.18.0.5 31105 port [tcp/31105] succeeded!\nI0203 21:53:04.445107    3744 log.go:172] (0xc000105130) Data frame received for 1\nI0203 21:53:04.445126    3744 log.go:172] (0xc0008f2000) (1) Data frame handling\nI0203 21:53:04.445137    3744 log.go:172] (0xc0008f2000) (1) Data frame sent\nI0203 21:53:04.445150    3744 log.go:172] (0xc000105130) (0xc0008f2000) Stream removed, broadcasting: 1\nI0203 21:53:04.445263    3744 log.go:172] (0xc000105130) Go away received\nI0203 21:53:04.445568    3744 log.go:172] (0xc000105130) (0xc0008f2000) Stream removed, broadcasting: 1\nI0203 21:53:04.445591    3744 log.go:172] (0xc000105130) (0xc0008f20a0) Stream removed, broadcasting: 3\nI0203 21:53:04.445602    3744 log.go:172] (0xc000105130) (0xc0006adae0) Stream removed, broadcasting: 5\n"
Feb  3 21:53:04.450: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:04.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3746" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.182 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":249,"skipped":4208,"failed":0}
S
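A NodePort service only needs the type set: the apiserver allocates the node port (31105 in this run), and the test then probes the service name, the ClusterIP, and each node IP with nc. A sketch of the service object (selector label hypothetical):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nodePortService leaves spec.ports[].nodePort unset, so the apiserver
// allocates one from the configured NodePort range.
func nodePortService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "nodeport-test"}, // hypothetical label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
}
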
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:04.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:53:04.569: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  3 21:53:06.635: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8146" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":250,"skipped":4209,"failed":0}
SSSSSSSSSSSS
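The failure condition this test asserts is ReplicaFailure on the RC status, set while the quota blocks pod creation and cleared after the scale-down. A small check, as a sketch:

package main

import corev1 "k8s.io/api/core/v1"

// hasReplicaFailure reports whether the RC carries the condition the
// test asserts: ReplicaFailure=True while the quota blocks pods,
// cleared once spec.replicas fits within the quota again.
func hasReplicaFailure(rc *corev1.ReplicationController) bool {
	for _, cond := range rc.Status.Conditions {
		if cond.Type == corev1.ReplicationControllerReplicaFailure && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}
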
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:07.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:14.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-194" for this suite.

• [SLOW TEST:6.554 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4221,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:14.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:53:14.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a" in namespace "downward-api-3244" to be "success or failure"
Feb  3 21:53:14.556: INFO: Pod "downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.128194ms
Feb  3 21:53:16.630: INFO: Pod "downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08275741s
Feb  3 21:53:18.677: INFO: Pod "downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130033379s
STEP: Saw pod success
Feb  3 21:53:18.677: INFO: Pod "downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a" satisfied condition "success or failure"
Feb  3 21:53:18.680: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a container client-container: 
STEP: delete the pod
Feb  3 21:53:19.170: INFO: Waiting for pod downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a to disappear
Feb  3 21:53:19.194: INFO: Pod downwardapi-volume-97fc4aac-3113-4b01-9b3d-4a4628426e4a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:19.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3244" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4234,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:19.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  3 21:53:23.469: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:23.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-493" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4245,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:23.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  3 21:53:28.119: INFO: Successfully updated pod "pod-update-7c36d777-64c3-428f-9c99-6dd8fcccd6d2"
STEP: verifying the updated pod is in kubernetes
Feb  3 21:53:28.161: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:28.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2700" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4255,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:28.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6226" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4268,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:32.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 21:53:32.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453" in namespace "projected-2769" to be "success or failure"
Feb  3 21:53:32.486: INFO: Pod "downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453": Phase="Pending", Reason="", readiness=false. Elapsed: 9.409153ms
Feb  3 21:53:34.642: INFO: Pod "downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165856985s
Feb  3 21:53:36.654: INFO: Pod "downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177510912s
STEP: Saw pod success
Feb  3 21:53:36.654: INFO: Pod "downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453" satisfied condition "success or failure"
Feb  3 21:53:36.657: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453 container client-container: 
STEP: delete the pod
Feb  3 21:53:36.679: INFO: Waiting for pod downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453 to disappear
Feb  3 21:53:36.683: INFO: Pod downwardapi-volume-1b8828c3-def1-4761-b6e7-3624eefdf453 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:36.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2769" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4286,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:36.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  3 21:53:36.953: INFO: Waiting up to 5m0s for pod "pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55" in namespace "emptydir-2186" to be "success or failure"
Feb  3 21:53:36.964: INFO: Pod "pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55": Phase="Pending", Reason="", readiness=false. Elapsed: 11.001409ms
Feb  3 21:53:38.968: INFO: Pod "pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015002213s
Feb  3 21:53:40.977: INFO: Pod "pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023816358s
STEP: Saw pod success
Feb  3 21:53:40.977: INFO: Pod "pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55" satisfied condition "success or failure"
Feb  3 21:53:40.980: INFO: Trying to get logs from node jerma-worker2 pod pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55 container test-container: 
STEP: delete the pod
Feb  3 21:53:41.012: INFO: Waiting for pod pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55 to disappear
Feb  3 21:53:41.033: INFO: Pod pod-ee9ec0e2-0a2b-487e-9477-38d71ee00e55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:53:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2186" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4294,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:53:41.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-81f91289-d972-4c88-9b48-2837c8e44375 in namespace container-probe-2883
Feb  3 21:53:45.166: INFO: Started pod busybox-81f91289-d972-4c88-9b48-2837c8e44375 in namespace container-probe-2883
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 21:53:45.168: INFO: Initial restart count of pod busybox-81f91289-d972-4c88-9b48-2837c8e44375 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2883" for this suite.

• [SLOW TEST:244.720 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4299,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:45.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:45.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3984" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":259,"skipped":4305,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:45.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  3 21:57:45.937: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-523 /api/v1/namespaces/watch-523/configmaps/e2e-watch-test-watch-closed dee73cc9-fb53-40bd-9456-4c47e14e6b8c 6399214 0 2021-02-03 21:57:45 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 21:57:45.937: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-523 /api/v1/namespaces/watch-523/configmaps/e2e-watch-test-watch-closed dee73cc9-fb53-40bd-9456-4c47e14e6b8c 6399215 0 2021-02-03 21:57:45 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  3 21:57:45.954: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-523 /api/v1/namespaces/watch-523/configmaps/e2e-watch-test-watch-closed dee73cc9-fb53-40bd-9456-4c47e14e6b8c 6399216 0 2021-02-03 21:57:45 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 21:57:45.954: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-523 /api/v1/namespaces/watch-523/configmaps/e2e-watch-test-watch-closed dee73cc9-fb53-40bd-9456-4c47e14e6b8c 6399217 0 2021-02-03 21:57:45 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:45.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-523" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":260,"skipped":4318,"failed":0}
S
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:45.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb  3 21:57:52.542: INFO: Successfully updated pod "adopt-release-lcs4c"
STEP: Checking that the Job readopts the Pod
Feb  3 21:57:52.542: INFO: Waiting up to 15m0s for pod "adopt-release-lcs4c" in namespace "job-5525" to be "adopted"
Feb  3 21:57:52.546: INFO: Pod "adopt-release-lcs4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.143159ms
Feb  3 21:57:54.550: INFO: Pod "adopt-release-lcs4c": Phase="Running", Reason="", readiness=true. Elapsed: 2.007930395s
Feb  3 21:57:54.550: INFO: Pod "adopt-release-lcs4c" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb  3 21:57:55.058: INFO: Successfully updated pod "adopt-release-lcs4c"
STEP: Checking that the Job releases the Pod
Feb  3 21:57:55.058: INFO: Waiting up to 15m0s for pod "adopt-release-lcs4c" in namespace "job-5525" to be "released"
Feb  3 21:57:55.087: INFO: Pod "adopt-release-lcs4c": Phase="Running", Reason="", readiness=true. Elapsed: 28.638656ms
Feb  3 21:57:57.090: INFO: Pod "adopt-release-lcs4c": Phase="Running", Reason="", readiness=true. Elapsed: 2.03181978s
Feb  3 21:57:57.090: INFO: Pod "adopt-release-lcs4c" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:57:57.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5525" for this suite.

• [SLOW TEST:11.135 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":261,"skipped":4319,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:57:57.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:57:57.977: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:58:00.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986278, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986278, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986278, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986277, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:58:03.211: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb  3 21:58:07.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4769 to-be-attached-pod -i -c=container1'
Feb  3 21:58:07.356: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:07.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4769" for this suite.
STEP: Destroying namespace "webhook-4769-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.401 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":262,"skipped":4331,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:07.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 21:58:07.597: INFO: Waiting up to 5m0s for pod "pod-3c0c4549-d28a-4755-ac13-b5254516190c" in namespace "emptydir-7752" to be "success or failure"
Feb  3 21:58:07.600: INFO: Pod "pod-3c0c4549-d28a-4755-ac13-b5254516190c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.626875ms
Feb  3 21:58:09.605: INFO: Pod "pod-3c0c4549-d28a-4755-ac13-b5254516190c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008046337s
Feb  3 21:58:11.609: INFO: Pod "pod-3c0c4549-d28a-4755-ac13-b5254516190c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011826693s
STEP: Saw pod success
Feb  3 21:58:11.609: INFO: Pod "pod-3c0c4549-d28a-4755-ac13-b5254516190c" satisfied condition "success or failure"
Feb  3 21:58:11.611: INFO: Trying to get logs from node jerma-worker2 pod pod-3c0c4549-d28a-4755-ac13-b5254516190c container test-container: 
STEP: delete the pod
Feb  3 21:58:11.656: INFO: Waiting for pod pod-3c0c4549-d28a-4755-ac13-b5254516190c to disappear
Feb  3 21:58:11.672: INFO: Pod pod-3c0c4549-d28a-4755-ac13-b5254516190c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:11.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7752" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4337,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:11.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:58:12.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:58:14.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986292, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986292, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986292, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986292, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:58:17.863: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:17.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8580" for this suite.
STEP: Destroying namespace "webhook-8580-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.424 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":264,"skipped":4338,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:18.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  3 21:58:19.511: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  3 21:58:21.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 21:58:23.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747986299, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  3 21:58:26.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:27.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3639" for this suite.
STEP: Destroying namespace "webhook-3639-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.226 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":265,"skipped":4343,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:27.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 21:58:27.447: INFO: Waiting up to 5m0s for pod "downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd" in namespace "downward-api-6759" to be "success or failure"
Feb  3 21:58:27.463: INFO: Pod "downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.54039ms
Feb  3 21:58:29.467: INFO: Pod "downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020222871s
Feb  3 21:58:31.472: INFO: Pod "downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024411508s
STEP: Saw pod success
Feb  3 21:58:31.472: INFO: Pod "downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd" satisfied condition "success or failure"
Feb  3 21:58:31.474: INFO: Trying to get logs from node jerma-worker2 pod downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd container dapi-container: 
STEP: delete the pod
Feb  3 21:58:31.539: INFO: Waiting for pod downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd to disappear
Feb  3 21:58:31.628: INFO: Pod downward-api-80a04f2c-9865-40be-8e9f-c517af77eecd no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:58:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6759" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4357,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:58:31.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 21:58:31.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb  3 21:58:32.265: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:32Z generation:1 name:name1 resourceVersion:6399693 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b4184192-f7a9-474e-84b4-2352e1d80e1b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb  3 21:58:42.270: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:42Z generation:1 name:name2 resourceVersion:6399758 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c3bba7be-ef3e-433d-8116-1d35671ebced] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb  3 21:58:52.277: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:32Z generation:2 name:name1 resourceVersion:6399792 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b4184192-f7a9-474e-84b4-2352e1d80e1b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb  3 21:59:02.282: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:42Z generation:2 name:name2 resourceVersion:6399823 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c3bba7be-ef3e-433d-8116-1d35671ebced] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb  3 21:59:12.290: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:32Z generation:2 name:name1 resourceVersion:6399853 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:b4184192-f7a9-474e-84b4-2352e1d80e1b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb  3 21:59:22.297: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-03T21:58:42Z generation:2 name:name2 resourceVersion:6399883 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c3bba7be-ef3e-433d-8116-1d35671ebced] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 21:59:32.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5999" for this suite.

• [SLOW TEST:61.171 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":267,"skipped":4370,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 21:59:32.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0203 22:00:03.459123       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 22:00:03.459: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:03.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5325" for this suite.

• [SLOW TEST:30.652 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":268,"skipped":4381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:03.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-f119294d-b4c2-4878-b8e5-53222aad566d
STEP: Creating configMap with name cm-test-opt-upd-a510085b-4ece-48a7-8143-a615e1d78ee8
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f119294d-b4c2-4878-b8e5-53222aad566d
STEP: Updating configmap cm-test-opt-upd-a510085b-4ece-48a7-8143-a615e1d78ee8
STEP: Creating configMap with name cm-test-opt-create-84cf49de-10d0-49eb-855d-3d362db929fb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:11.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7539" for this suite.

• [SLOW TEST:8.371 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4403,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:11.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Feb  3 22:00:11.910: INFO: Waiting up to 5m0s for pod "var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893" in namespace "var-expansion-2409" to be "success or failure"
Feb  3 22:00:11.914: INFO: Pod "var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155834ms
Feb  3 22:00:13.918: INFO: Pod "var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007508633s
Feb  3 22:00:15.922: INFO: Pod "var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011831688s
STEP: Saw pod success
Feb  3 22:00:15.922: INFO: Pod "var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893" satisfied condition "success or failure"
Feb  3 22:00:15.925: INFO: Trying to get logs from node jerma-worker pod var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893 container dapi-container: 
STEP: delete the pod
Feb  3 22:00:15.972: INFO: Waiting for pod var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893 to disappear
Feb  3 22:00:15.986: INFO: Pod var-expansion-af9d38d7-63af-48a0-bb41-8a268c10a893 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:15.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2409" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4404,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:15.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7050
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-7050
Feb  3 22:00:16.099: INFO: Found 0 stateful pods, waiting for 1
Feb  3 22:00:26.113: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:00:26.154: INFO: Deleting all statefulset in ns statefulset-7050
Feb  3 22:00:26.160: INFO: Scaling statefulset ss to 0
Feb  3 22:00:46.218: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:00:46.221: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:00:46.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7050" for this suite.

• [SLOW TEST:30.255 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":271,"skipped":4414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:00:46.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  3 22:00:46.339: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 22:00:46.380: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:46.382: INFO: Number of nodes with available pods: 0
Feb  3 22:00:46.382: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:00:47.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:47.391: INFO: Number of nodes with available pods: 0
Feb  3 22:00:47.391: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:00:48.516: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:48.519: INFO: Number of nodes with available pods: 0
Feb  3 22:00:48.519: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:00:49.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:49.390: INFO: Number of nodes with available pods: 0
Feb  3 22:00:49.390: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:00:50.388: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:50.392: INFO: Number of nodes with available pods: 0
Feb  3 22:00:50.392: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:00:51.389: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:51.406: INFO: Number of nodes with available pods: 2
Feb  3 22:00:51.406: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  3 22:00:51.460: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:51.460: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:51.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:52.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:52.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:52.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:53.504: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:53.504: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:53.507: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:54.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:54.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:54.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:55.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:55.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:55.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:00:55.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:56.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:56.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:56.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:00:56.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:57.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:57.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:57.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:00:57.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:58.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:58.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:58.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:00:58.510: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:00:59.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:59.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:00:59.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:00:59.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:00.507: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:00.507: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:00.507: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:01:00.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:01.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:01.505: INFO: Wrong image for pod: daemon-set-pxh45. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:01.505: INFO: Pod daemon-set-pxh45 is not available
Feb  3 22:01:01.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:02.505: INFO: Pod daemon-set-2pmxs is not available
Feb  3 22:01:02.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:02.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:03.505: INFO: Pod daemon-set-2pmxs is not available
Feb  3 22:01:03.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:03.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:04.504: INFO: Pod daemon-set-2pmxs is not available
Feb  3 22:01:04.504: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:04.507: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:05.505: INFO: Pod daemon-set-2pmxs is not available
Feb  3 22:01:05.505: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:05.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:06.536: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:06.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:07.504: INFO: Wrong image for pod: daemon-set-l7lbw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb  3 22:01:07.504: INFO: Pod daemon-set-l7lbw is not available
Feb  3 22:01:07.508: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:08.504: INFO: Pod daemon-set-lq4dt is not available
Feb  3 22:01:08.507: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  3 22:01:08.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:08.512: INFO: Number of nodes with available pods: 1
Feb  3 22:01:08.512: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:01:09.518: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:09.521: INFO: Number of nodes with available pods: 1
Feb  3 22:01:09.521: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:01:10.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:10.521: INFO: Number of nodes with available pods: 1
Feb  3 22:01:10.521: INFO: Node jerma-worker is running more than one daemon pod
Feb  3 22:01:11.518: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb  3 22:01:11.521: INFO: Number of nodes with available pods: 2
Feb  3 22:01:11.521: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3641, will wait for the garbage collector to delete the pods
Feb  3 22:01:11.609: INFO: Deleting DaemonSet.extensions daemon-set took: 20.296796ms
Feb  3 22:01:12.009: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.24348ms
Feb  3 22:01:21.312: INFO: Number of nodes with available pods: 0
Feb  3 22:01:21.312: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 22:01:21.315: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3641/daemonsets","resourceVersion":"6400518"},"items":null}

Feb  3 22:01:21.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3641/pods","resourceVersion":"6400518"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:21.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3641" for this suite.

• [SLOW TEST:35.089 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":272,"skipped":4447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:21.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  3 22:01:21.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28" in namespace "projected-2606" to be "success or failure"
Feb  3 22:01:21.470: INFO: Pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28": Phase="Pending", Reason="", readiness=false. Elapsed: 31.24485ms
Feb  3 22:01:23.547: INFO: Pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107890723s
Feb  3 22:01:25.565: INFO: Pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28": Phase="Running", Reason="", readiness=true. Elapsed: 4.125871425s
Feb  3 22:01:27.569: INFO: Pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12957512s
STEP: Saw pod success
Feb  3 22:01:27.569: INFO: Pod "downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28" satisfied condition "success or failure"
Feb  3 22:01:27.571: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28 container client-container: 
STEP: delete the pod
Feb  3 22:01:27.594: INFO: Waiting for pod downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28 to disappear
Feb  3 22:01:27.598: INFO: Pod downwardapi-volume-a9740a09-c3b4-4e59-97ef-4bd04c0e6f28 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:27.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2606" for this suite.

• [SLOW TEST:6.292 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4471,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:27.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb  3 22:01:27.690: INFO: namespace kubectl-4674
Feb  3 22:01:27.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4674'
Feb  3 22:01:30.708: INFO: stderr: ""
Feb  3 22:01:30.708: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  3 22:01:31.712: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:01:31.712: INFO: Found 0 / 1
Feb  3 22:01:32.811: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:01:32.811: INFO: Found 0 / 1
Feb  3 22:01:33.715: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:01:33.715: INFO: Found 0 / 1
Feb  3 22:01:34.712: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:01:34.712: INFO: Found 1 / 1
Feb  3 22:01:34.712: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  3 22:01:34.715: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  3 22:01:34.715: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 22:01:34.715: INFO: wait on agnhost-master startup in kubectl-4674 
Feb  3 22:01:34.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-7bnxx agnhost-master --namespace=kubectl-4674'
Feb  3 22:01:34.831: INFO: stderr: ""
Feb  3 22:01:34.831: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb  3 22:01:34.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4674'
Feb  3 22:01:34.976: INFO: stderr: ""
Feb  3 22:01:34.976: INFO: stdout: "service/rm2 exposed\n"
Feb  3 22:01:34.982: INFO: Service rm2 in namespace kubectl-4674 found.
STEP: exposing service
Feb  3 22:01:36.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4674'
Feb  3 22:01:37.158: INFO: stderr: ""
Feb  3 22:01:37.158: INFO: stdout: "service/rm3 exposed\n"
Feb  3 22:01:37.162: INFO: Service rm3 in namespace kubectl-4674 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:39.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4674" for this suite.

• [SLOW TEST:11.547 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":274,"skipped":4472,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:39.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  3 22:01:39.281: INFO: Waiting up to 5m0s for pod "downward-api-87a16dca-a399-4b11-89b4-87a528592863" in namespace "downward-api-3187" to be "success or failure"
Feb  3 22:01:39.294: INFO: Pod "downward-api-87a16dca-a399-4b11-89b4-87a528592863": Phase="Pending", Reason="", readiness=false. Elapsed: 12.490586ms
Feb  3 22:01:41.297: INFO: Pod "downward-api-87a16dca-a399-4b11-89b4-87a528592863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015665523s
Feb  3 22:01:43.301: INFO: Pod "downward-api-87a16dca-a399-4b11-89b4-87a528592863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020116774s
STEP: Saw pod success
Feb  3 22:01:43.302: INFO: Pod "downward-api-87a16dca-a399-4b11-89b4-87a528592863" satisfied condition "success or failure"
Feb  3 22:01:43.305: INFO: Trying to get logs from node jerma-worker2 pod downward-api-87a16dca-a399-4b11-89b4-87a528592863 container dapi-container: 
STEP: delete the pod
Feb  3 22:01:43.322: INFO: Waiting for pod downward-api-87a16dca-a399-4b11-89b4-87a528592863 to disappear
Feb  3 22:01:43.326: INFO: Pod downward-api-87a16dca-a399-4b11-89b4-87a528592863 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:43.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3187" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4481,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:43.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  3 22:01:43.469: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:01:51.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2088" for this suite.

• [SLOW TEST:8.420 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":276,"skipped":4509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:01:51.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6411
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6411
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6411
Feb  3 22:01:51.858: INFO: Found 0 stateful pods, waiting for 1
Feb  3 22:02:01.863: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  3 22:02:01.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:02:02.273: INFO: stderr: "I0203 22:02:02.157700    3889 log.go:172] (0xc0000f7760) (0xc000a70460) Create stream\nI0203 22:02:02.157786    3889 log.go:172] (0xc0000f7760) (0xc000a70460) Stream added, broadcasting: 1\nI0203 22:02:02.159695    3889 log.go:172] (0xc0000f7760) Reply frame received for 1\nI0203 22:02:02.159745    3889 log.go:172] (0xc0000f7760) (0xc000ade0a0) Create stream\nI0203 22:02:02.159768    3889 log.go:172] (0xc0000f7760) (0xc000ade0a0) Stream added, broadcasting: 3\nI0203 22:02:02.160811    3889 log.go:172] (0xc0000f7760) Reply frame received for 3\nI0203 22:02:02.160968    3889 log.go:172] (0xc0000f7760) (0xc000a70500) Create stream\nI0203 22:02:02.160990    3889 log.go:172] (0xc0000f7760) (0xc000a70500) Stream added, broadcasting: 5\nI0203 22:02:02.162072    3889 log.go:172] (0xc0000f7760) Reply frame received for 5\nI0203 22:02:02.238928    3889 log.go:172] (0xc0000f7760) Data frame received for 5\nI0203 22:02:02.238957    3889 log.go:172] (0xc000a70500) (5) Data frame handling\nI0203 22:02:02.238974    3889 log.go:172] (0xc000a70500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:02:02.264455    3889 log.go:172] (0xc0000f7760) Data frame received for 3\nI0203 22:02:02.264483    3889 log.go:172] (0xc000ade0a0) (3) Data frame handling\nI0203 22:02:02.264491    3889 log.go:172] (0xc000ade0a0) (3) Data frame sent\nI0203 22:02:02.264686    3889 log.go:172] (0xc0000f7760) Data frame received for 3\nI0203 22:02:02.264699    3889 log.go:172] (0xc000ade0a0) (3) Data frame handling\nI0203 22:02:02.264728    3889 log.go:172] (0xc0000f7760) Data frame received for 5\nI0203 22:02:02.264757    3889 log.go:172] (0xc000a70500) (5) Data frame handling\nI0203 22:02:02.266428    3889 log.go:172] (0xc0000f7760) Data frame received for 1\nI0203 22:02:02.266442    3889 log.go:172] (0xc000a70460) (1) Data frame handling\nI0203 22:02:02.266451    3889 log.go:172] (0xc000a70460) (1) Data frame sent\nI0203 22:02:02.266581    3889 log.go:172] (0xc0000f7760) (0xc000a70460) Stream removed, broadcasting: 1\nI0203 22:02:02.266711    3889 log.go:172] (0xc0000f7760) Go away received\nI0203 22:02:02.266833    3889 log.go:172] (0xc0000f7760) (0xc000a70460) Stream removed, broadcasting: 1\nI0203 22:02:02.266850    3889 log.go:172] (0xc0000f7760) (0xc000ade0a0) Stream removed, broadcasting: 3\nI0203 22:02:02.266863    3889 log.go:172] (0xc0000f7760) (0xc000a70500) Stream removed, broadcasting: 5\n"
Feb  3 22:02:02.273: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:02:02.273: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:02:02.276: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 22:02:12.281: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:02:12.281: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:02:12.338: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999468s
Feb  3 22:02:13.343: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.953502905s
Feb  3 22:02:14.347: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.949079191s
Feb  3 22:02:15.352: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.944547307s
Feb  3 22:02:16.356: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.939874238s
Feb  3 22:02:17.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.936045942s
Feb  3 22:02:18.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.932480802s
Feb  3 22:02:19.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.927789421s
Feb  3 22:02:20.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.923089892s
Feb  3 22:02:21.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.611791ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6411
Feb  3 22:02:22.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:02:22.617: INFO: stderr: "I0203 22:02:22.517067    3909 log.go:172] (0xc000b1c840) (0xc000721a40) Create stream\nI0203 22:02:22.517121    3909 log.go:172] (0xc000b1c840) (0xc000721a40) Stream added, broadcasting: 1\nI0203 22:02:22.519930    3909 log.go:172] (0xc000b1c840) Reply frame received for 1\nI0203 22:02:22.519990    3909 log.go:172] (0xc000b1c840) (0xc000942000) Create stream\nI0203 22:02:22.520009    3909 log.go:172] (0xc000b1c840) (0xc000942000) Stream added, broadcasting: 3\nI0203 22:02:22.521328    3909 log.go:172] (0xc000b1c840) Reply frame received for 3\nI0203 22:02:22.521354    3909 log.go:172] (0xc000b1c840) (0xc000721c20) Create stream\nI0203 22:02:22.521361    3909 log.go:172] (0xc000b1c840) (0xc000721c20) Stream added, broadcasting: 5\nI0203 22:02:22.522228    3909 log.go:172] (0xc000b1c840) Reply frame received for 5\nI0203 22:02:22.608817    3909 log.go:172] (0xc000b1c840) Data frame received for 3\nI0203 22:02:22.608914    3909 log.go:172] (0xc000942000) (3) Data frame handling\nI0203 22:02:22.608929    3909 log.go:172] (0xc000942000) (3) Data frame sent\nI0203 22:02:22.608941    3909 log.go:172] (0xc000b1c840) Data frame received for 3\nI0203 22:02:22.608947    3909 log.go:172] (0xc000942000) (3) Data frame handling\nI0203 22:02:22.609027    3909 log.go:172] (0xc000b1c840) Data frame received for 5\nI0203 22:02:22.609092    3909 log.go:172] (0xc000721c20) (5) Data frame handling\nI0203 22:02:22.609134    3909 log.go:172] (0xc000721c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:02:22.609160    3909 log.go:172] (0xc000b1c840) Data frame received for 5\nI0203 22:02:22.609177    3909 log.go:172] (0xc000721c20) (5) Data frame handling\nI0203 22:02:22.610754    3909 log.go:172] (0xc000b1c840) Data frame received for 1\nI0203 22:02:22.610788    3909 log.go:172] (0xc000721a40) (1) Data frame handling\nI0203 22:02:22.610815    3909 log.go:172] (0xc000721a40) (1) Data frame sent\nI0203 22:02:22.610842    3909 log.go:172] (0xc000b1c840) (0xc000721a40) Stream removed, broadcasting: 1\nI0203 22:02:22.610871    3909 log.go:172] (0xc000b1c840) Go away received\nI0203 22:02:22.611290    3909 log.go:172] (0xc000b1c840) (0xc000721a40) Stream removed, broadcasting: 1\nI0203 22:02:22.611313    3909 log.go:172] (0xc000b1c840) (0xc000942000) Stream removed, broadcasting: 3\nI0203 22:02:22.611325    3909 log.go:172] (0xc000b1c840) (0xc000721c20) Stream removed, broadcasting: 5\n"
Feb  3 22:02:22.617: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:02:22.617: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:02:22.621: INFO: Found 1 stateful pods, waiting for 3
Feb  3 22:02:32.625: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:02:32.625: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 22:02:32.625: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  3 22:02:32.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:02:32.854: INFO: stderr: "I0203 22:02:32.783222    3933 log.go:172] (0xc000986000) (0xc0006039a0) Create stream\nI0203 22:02:32.783286    3933 log.go:172] (0xc000986000) (0xc0006039a0) Stream added, broadcasting: 1\nI0203 22:02:32.786094    3933 log.go:172] (0xc000986000) Reply frame received for 1\nI0203 22:02:32.786152    3933 log.go:172] (0xc000986000) (0xc0007d6b40) Create stream\nI0203 22:02:32.786167    3933 log.go:172] (0xc000986000) (0xc0007d6b40) Stream added, broadcasting: 3\nI0203 22:02:32.787040    3933 log.go:172] (0xc000986000) Reply frame received for 3\nI0203 22:02:32.787086    3933 log.go:172] (0xc000986000) (0xc000b28000) Create stream\nI0203 22:02:32.787110    3933 log.go:172] (0xc000986000) (0xc000b28000) Stream added, broadcasting: 5\nI0203 22:02:32.787897    3933 log.go:172] (0xc000986000) Reply frame received for 5\nI0203 22:02:32.846886    3933 log.go:172] (0xc000986000) Data frame received for 3\nI0203 22:02:32.846918    3933 log.go:172] (0xc0007d6b40) (3) Data frame handling\nI0203 22:02:32.846947    3933 log.go:172] (0xc0007d6b40) (3) Data frame sent\nI0203 22:02:32.846957    3933 log.go:172] (0xc000986000) Data frame received for 3\nI0203 22:02:32.846963    3933 log.go:172] (0xc0007d6b40) (3) Data frame handling\nI0203 22:02:32.847045    3933 log.go:172] (0xc000986000) Data frame received for 5\nI0203 22:02:32.847088    3933 log.go:172] (0xc000b28000) (5) Data frame handling\nI0203 22:02:32.847123    3933 log.go:172] (0xc000b28000) (5) Data frame sent\nI0203 22:02:32.847140    3933 log.go:172] (0xc000986000) Data frame received for 5\nI0203 22:02:32.847155    3933 log.go:172] (0xc000b28000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:02:32.848830    3933 log.go:172] (0xc000986000) Data frame received for 1\nI0203 22:02:32.848994    3933 log.go:172] (0xc0006039a0) (1) Data frame handling\nI0203 22:02:32.849033    3933 log.go:172] (0xc0006039a0) (1) Data frame sent\nI0203 22:02:32.849060    3933 log.go:172] (0xc000986000) (0xc0006039a0) Stream removed, broadcasting: 1\nI0203 22:02:32.849088    3933 log.go:172] (0xc000986000) Go away received\nI0203 22:02:32.849368    3933 log.go:172] (0xc000986000) (0xc0006039a0) Stream removed, broadcasting: 1\nI0203 22:02:32.849385    3933 log.go:172] (0xc000986000) (0xc0007d6b40) Stream removed, broadcasting: 3\nI0203 22:02:32.849392    3933 log.go:172] (0xc000986000) (0xc000b28000) Stream removed, broadcasting: 5\n"
Feb  3 22:02:32.854: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:02:32.854: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:02:32.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:02:33.090: INFO: stderr: "I0203 22:02:32.985298    3953 log.go:172] (0xc0009c4000) (0xc00057c6e0) Create stream\nI0203 22:02:32.985362    3953 log.go:172] (0xc0009c4000) (0xc00057c6e0) Stream added, broadcasting: 1\nI0203 22:02:32.987866    3953 log.go:172] (0xc0009c4000) Reply frame received for 1\nI0203 22:02:32.987920    3953 log.go:172] (0xc0009c4000) (0xc0003934a0) Create stream\nI0203 22:02:32.987947    3953 log.go:172] (0xc0009c4000) (0xc0003934a0) Stream added, broadcasting: 3\nI0203 22:02:32.989425    3953 log.go:172] (0xc0009c4000) Reply frame received for 3\nI0203 22:02:32.989449    3953 log.go:172] (0xc0009c4000) (0xc000b7e000) Create stream\nI0203 22:02:32.989457    3953 log.go:172] (0xc0009c4000) (0xc000b7e000) Stream added, broadcasting: 5\nI0203 22:02:32.990580    3953 log.go:172] (0xc0009c4000) Reply frame received for 5\nI0203 22:02:33.059197    3953 log.go:172] (0xc0009c4000) Data frame received for 5\nI0203 22:02:33.059229    3953 log.go:172] (0xc000b7e000) (5) Data frame handling\nI0203 22:02:33.059251    3953 log.go:172] (0xc000b7e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:02:33.083495    3953 log.go:172] (0xc0009c4000) Data frame received for 5\nI0203 22:02:33.083539    3953 log.go:172] (0xc000b7e000) (5) Data frame handling\nI0203 22:02:33.083563    3953 log.go:172] (0xc0009c4000) Data frame received for 3\nI0203 22:02:33.083576    3953 log.go:172] (0xc0003934a0) (3) Data frame handling\nI0203 22:02:33.083604    3953 log.go:172] (0xc0003934a0) (3) Data frame sent\nI0203 22:02:33.083616    3953 log.go:172] (0xc0009c4000) Data frame received for 3\nI0203 22:02:33.083624    3953 log.go:172] (0xc0003934a0) (3) Data frame handling\nI0203 22:02:33.085049    3953 log.go:172] (0xc0009c4000) Data frame received for 1\nI0203 22:02:33.085087    3953 log.go:172] (0xc00057c6e0) (1) Data frame handling\nI0203 22:02:33.085103    3953 log.go:172] (0xc00057c6e0) (1) Data frame sent\nI0203 22:02:33.085116    3953 log.go:172] (0xc0009c4000) (0xc00057c6e0) Stream removed, broadcasting: 1\nI0203 22:02:33.085135    3953 log.go:172] (0xc0009c4000) Go away received\nI0203 22:02:33.085480    3953 log.go:172] (0xc0009c4000) (0xc00057c6e0) Stream removed, broadcasting: 1\nI0203 22:02:33.085498    3953 log.go:172] (0xc0009c4000) (0xc0003934a0) Stream removed, broadcasting: 3\nI0203 22:02:33.085510    3953 log.go:172] (0xc0009c4000) (0xc000b7e000) Stream removed, broadcasting: 5\n"
Feb  3 22:02:33.090: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:02:33.090: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:02:33.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb  3 22:02:33.390: INFO: stderr: "I0203 22:02:33.276191    3973 log.go:172] (0xc0009b84d0) (0xc0003314a0) Create stream\nI0203 22:02:33.276247    3973 log.go:172] (0xc0009b84d0) (0xc0003314a0) Stream added, broadcasting: 1\nI0203 22:02:33.279021    3973 log.go:172] (0xc0009b84d0) Reply frame received for 1\nI0203 22:02:33.279072    3973 log.go:172] (0xc0009b84d0) (0xc0009ae000) Create stream\nI0203 22:02:33.279086    3973 log.go:172] (0xc0009b84d0) (0xc0009ae000) Stream added, broadcasting: 3\nI0203 22:02:33.280146    3973 log.go:172] (0xc0009b84d0) Reply frame received for 3\nI0203 22:02:33.280189    3973 log.go:172] (0xc0009b84d0) (0xc0009ae0a0) Create stream\nI0203 22:02:33.280218    3973 log.go:172] (0xc0009b84d0) (0xc0009ae0a0) Stream added, broadcasting: 5\nI0203 22:02:33.281245    3973 log.go:172] (0xc0009b84d0) Reply frame received for 5\nI0203 22:02:33.341469    3973 log.go:172] (0xc0009b84d0) Data frame received for 5\nI0203 22:02:33.341509    3973 log.go:172] (0xc0009ae0a0) (5) Data frame handling\nI0203 22:02:33.341531    3973 log.go:172] (0xc0009ae0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 22:02:33.382369    3973 log.go:172] (0xc0009b84d0) Data frame received for 3\nI0203 22:02:33.382408    3973 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0203 22:02:33.382438    3973 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0203 22:02:33.382644    3973 log.go:172] (0xc0009b84d0) Data frame received for 3\nI0203 22:02:33.382670    3973 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0203 22:02:33.382806    3973 log.go:172] (0xc0009b84d0) Data frame received for 5\nI0203 22:02:33.382839    3973 log.go:172] (0xc0009ae0a0) (5) Data frame handling\nI0203 22:02:33.385002    3973 log.go:172] (0xc0009b84d0) Data frame received for 1\nI0203 22:02:33.385041    3973 log.go:172] (0xc0003314a0) (1) Data frame handling\nI0203 22:02:33.385064    3973 log.go:172] (0xc0003314a0) (1) Data frame sent\nI0203 22:02:33.385080    3973 log.go:172] (0xc0009b84d0) (0xc0003314a0) Stream removed, broadcasting: 1\nI0203 22:02:33.385095    3973 log.go:172] (0xc0009b84d0) Go away received\nI0203 22:02:33.385785    3973 log.go:172] (0xc0009b84d0) (0xc0003314a0) Stream removed, broadcasting: 1\nI0203 22:02:33.385810    3973 log.go:172] (0xc0009b84d0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0203 22:02:33.385825    3973 log.go:172] (0xc0009b84d0) (0xc0009ae0a0) Stream removed, broadcasting: 5\n"
Feb  3 22:02:33.390: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb  3 22:02:33.390: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb  3 22:02:33.390: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:02:33.398: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  3 22:02:43.425: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:02:43.425: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:02:43.425: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 22:02:43.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999372s
Feb  3 22:02:44.502: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975886477s
Feb  3 22:02:45.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961970399s
Feb  3 22:02:46.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.957545335s
Feb  3 22:02:47.518: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95320581s
Feb  3 22:02:48.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.945485051s
Feb  3 22:02:49.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940684523s
Feb  3 22:02:50.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.933552308s
Feb  3 22:02:51.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.928999628s
Feb  3 22:02:52.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 921.403325ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6411
Feb  3 22:02:53.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:02:53.758: INFO: stderr: "I0203 22:02:53.672083    3994 log.go:172] (0xc000944000) (0xc000699cc0) Create stream\nI0203 22:02:53.672137    3994 log.go:172] (0xc000944000) (0xc000699cc0) Stream added, broadcasting: 1\nI0203 22:02:53.674620    3994 log.go:172] (0xc000944000) Reply frame received for 1\nI0203 22:02:53.674678    3994 log.go:172] (0xc000944000) (0xc000616780) Create stream\nI0203 22:02:53.674701    3994 log.go:172] (0xc000944000) (0xc000616780) Stream added, broadcasting: 3\nI0203 22:02:53.675481    3994 log.go:172] (0xc000944000) Reply frame received for 3\nI0203 22:02:53.675527    3994 log.go:172] (0xc000944000) (0xc0002fb540) Create stream\nI0203 22:02:53.675549    3994 log.go:172] (0xc000944000) (0xc0002fb540) Stream added, broadcasting: 5\nI0203 22:02:53.676367    3994 log.go:172] (0xc000944000) Reply frame received for 5\nI0203 22:02:53.751049    3994 log.go:172] (0xc000944000) Data frame received for 5\nI0203 22:02:53.751086    3994 log.go:172] (0xc0002fb540) (5) Data frame handling\nI0203 22:02:53.751099    3994 log.go:172] (0xc0002fb540) (5) Data frame sent\nI0203 22:02:53.751107    3994 log.go:172] (0xc000944000) Data frame received for 5\nI0203 22:02:53.751113    3994 log.go:172] (0xc0002fb540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:02:53.751136    3994 log.go:172] (0xc000944000) Data frame received for 3\nI0203 22:02:53.751148    3994 log.go:172] (0xc000616780) (3) Data frame handling\nI0203 22:02:53.751161    3994 log.go:172] (0xc000616780) (3) Data frame sent\nI0203 22:02:53.751171    3994 log.go:172] (0xc000944000) Data frame received for 3\nI0203 22:02:53.751181    3994 log.go:172] (0xc000616780) (3) Data frame handling\nI0203 22:02:53.752128    3994 log.go:172] (0xc000944000) Data frame received for 1\nI0203 22:02:53.752194    3994 log.go:172] (0xc000699cc0) (1) Data frame handling\nI0203 22:02:53.752247    3994 log.go:172] (0xc000699cc0) (1) Data frame sent\nI0203 22:02:53.752282    3994 log.go:172] (0xc000944000) (0xc000699cc0) Stream removed, broadcasting: 1\nI0203 22:02:53.752305    3994 log.go:172] (0xc000944000) Go away received\nI0203 22:02:53.752655    3994 log.go:172] (0xc000944000) (0xc000699cc0) Stream removed, broadcasting: 1\nI0203 22:02:53.752675    3994 log.go:172] (0xc000944000) (0xc000616780) Stream removed, broadcasting: 3\nI0203 22:02:53.752687    3994 log.go:172] (0xc000944000) (0xc0002fb540) Stream removed, broadcasting: 5\n"
Feb  3 22:02:53.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:02:53.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:02:53.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:02:53.958: INFO: stderr: "I0203 22:02:53.890436    4015 log.go:172] (0xc000ac0a50) (0xc0008701e0) Create stream\nI0203 22:02:53.890494    4015 log.go:172] (0xc000ac0a50) (0xc0008701e0) Stream added, broadcasting: 1\nI0203 22:02:53.893447    4015 log.go:172] (0xc000ac0a50) Reply frame received for 1\nI0203 22:02:53.893510    4015 log.go:172] (0xc000ac0a50) (0xc000667ae0) Create stream\nI0203 22:02:53.893526    4015 log.go:172] (0xc000ac0a50) (0xc000667ae0) Stream added, broadcasting: 3\nI0203 22:02:53.894469    4015 log.go:172] (0xc000ac0a50) Reply frame received for 3\nI0203 22:02:53.894490    4015 log.go:172] (0xc000ac0a50) (0xc000870280) Create stream\nI0203 22:02:53.894496    4015 log.go:172] (0xc000ac0a50) (0xc000870280) Stream added, broadcasting: 5\nI0203 22:02:53.895353    4015 log.go:172] (0xc000ac0a50) Reply frame received for 5\nI0203 22:02:53.952457    4015 log.go:172] (0xc000ac0a50) Data frame received for 3\nI0203 22:02:53.952511    4015 log.go:172] (0xc000667ae0) (3) Data frame handling\nI0203 22:02:53.952533    4015 log.go:172] (0xc000667ae0) (3) Data frame sent\nI0203 22:02:53.952566    4015 log.go:172] (0xc000ac0a50) Data frame received for 5\nI0203 22:02:53.952583    4015 log.go:172] (0xc000870280) (5) Data frame handling\nI0203 22:02:53.952603    4015 log.go:172] (0xc000870280) (5) Data frame sent\nI0203 22:02:53.952622    4015 log.go:172] (0xc000ac0a50) Data frame received for 5\nI0203 22:02:53.952637    4015 log.go:172] (0xc000870280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:02:53.953144    4015 log.go:172] (0xc000ac0a50) Data frame received for 3\nI0203 22:02:53.953173    4015 log.go:172] (0xc000667ae0) (3) Data frame handling\nI0203 22:02:53.954084    4015 log.go:172] (0xc000ac0a50) Data frame received for 1\nI0203 22:02:53.954114    4015 log.go:172] (0xc0008701e0) (1) Data frame handling\nI0203 22:02:53.954131    4015 log.go:172] (0xc0008701e0) (1) Data frame sent\nI0203 22:02:53.954146    4015 log.go:172] (0xc000ac0a50) (0xc0008701e0) Stream removed, broadcasting: 1\nI0203 22:02:53.954201    4015 log.go:172] (0xc000ac0a50) Go away received\nI0203 22:02:53.954492    4015 log.go:172] (0xc000ac0a50) (0xc0008701e0) Stream removed, broadcasting: 1\nI0203 22:02:53.954507    4015 log.go:172] (0xc000ac0a50) (0xc000667ae0) Stream removed, broadcasting: 3\nI0203 22:02:53.954516    4015 log.go:172] (0xc000ac0a50) (0xc000870280) Stream removed, broadcasting: 5\n"
Feb  3 22:02:53.959: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:02:53.959: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb  3 22:02:53.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6411 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb  3 22:02:54.157: INFO: stderr: "I0203 22:02:54.079265    4036 log.go:172] (0xc000926840) (0xc000930140) Create stream\nI0203 22:02:54.079409    4036 log.go:172] (0xc000926840) (0xc000930140) Stream added, broadcasting: 1\nI0203 22:02:54.082168    4036 log.go:172] (0xc000926840) Reply frame received for 1\nI0203 22:02:54.082218    4036 log.go:172] (0xc000926840) (0xc000656780) Create stream\nI0203 22:02:54.082237    4036 log.go:172] (0xc000926840) (0xc000656780) Stream added, broadcasting: 3\nI0203 22:02:54.082915    4036 log.go:172] (0xc000926840) Reply frame received for 3\nI0203 22:02:54.082950    4036 log.go:172] (0xc000926840) (0xc0006b5b80) Create stream\nI0203 22:02:54.082966    4036 log.go:172] (0xc000926840) (0xc0006b5b80) Stream added, broadcasting: 5\nI0203 22:02:54.083627    4036 log.go:172] (0xc000926840) Reply frame received for 5\nI0203 22:02:54.149381    4036 log.go:172] (0xc000926840) Data frame received for 3\nI0203 22:02:54.149443    4036 log.go:172] (0xc000656780) (3) Data frame handling\nI0203 22:02:54.149461    4036 log.go:172] (0xc000656780) (3) Data frame sent\nI0203 22:02:54.149474    4036 log.go:172] (0xc000926840) Data frame received for 3\nI0203 22:02:54.149484    4036 log.go:172] (0xc000656780) (3) Data frame handling\nI0203 22:02:54.149520    4036 log.go:172] (0xc000926840) Data frame received for 5\nI0203 22:02:54.149539    4036 log.go:172] (0xc0006b5b80) (5) Data frame handling\nI0203 22:02:54.149555    4036 log.go:172] (0xc0006b5b80) (5) Data frame sent\nI0203 22:02:54.149564    4036 log.go:172] (0xc000926840) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 22:02:54.149570    4036 log.go:172] (0xc0006b5b80) (5) Data frame handling\nI0203 22:02:54.150671    4036 log.go:172] (0xc000926840) Data frame received for 1\nI0203 22:02:54.150683    4036 log.go:172] (0xc000930140) (1) Data frame handling\nI0203 22:02:54.150689    4036 log.go:172] (0xc000930140) (1) Data frame sent\nI0203 22:02:54.150848    4036 log.go:172] (0xc000926840) (0xc000930140) Stream removed, broadcasting: 1\nI0203 22:02:54.150867    4036 log.go:172] (0xc000926840) Go away received\nI0203 22:02:54.151390    4036 log.go:172] (0xc000926840) (0xc000930140) Stream removed, broadcasting: 1\nI0203 22:02:54.151428    4036 log.go:172] (0xc000926840) (0xc000656780) Stream removed, broadcasting: 3\nI0203 22:02:54.151444    4036 log.go:172] (0xc000926840) (0xc0006b5b80) Stream removed, broadcasting: 5\n"
Feb  3 22:02:54.157: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb  3 22:02:54.157: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

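The three exec calls above are the test's probe-restore step: the suite's HTTP readiness check reads /usr/local/apache2/htdocs/index.html, so moving the file back into the docroot marks each pod ready again before the ordered scale-down below is exercised. A minimal sketch of the same step, assuming the usual ss-0 through ss-2 replicas and reusing this run's namespace and kubeconfig path:

for pod in ss-0 ss-1 ss-2; do
  # Restore the file the readiness check serves; '|| true' keeps the
  # command idempotent if the file was already moved back.
  kubectl --kubeconfig=/root/.kube/config exec -n statefulset-6411 "$pod" -- \
    /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
done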
Feb  3 22:02:54.157: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  3 22:03:14.195: INFO: Deleting all statefulset in ns statefulset-6411
Feb  3 22:03:14.198: INFO: Scaling statefulset ss to 0
Feb  3 22:03:14.207: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 22:03:14.210: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:03:14.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6411" for this suite.

• [SLOW TEST:82.517 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":277,"skipped":4537,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  3 22:03:14.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-24680746-47b5-4764-a917-738022e5e06a
STEP: Creating a pod to test consume configMaps
Feb  3 22:03:14.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718" in namespace "configmap-8373" to be "success or failure"
Feb  3 22:03:14.351: INFO: Pod "pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718": Phase="Pending", Reason="", readiness=false. Elapsed: 21.839627ms
Feb  3 22:03:16.447: INFO: Pod "pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117844758s
Feb  3 22:03:18.451: INFO: Pod "pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121834065s
STEP: Saw pod success
Feb  3 22:03:18.451: INFO: Pod "pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718" satisfied condition "success or failure"
Feb  3 22:03:18.454: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718 container configmap-volume-test: 
STEP: delete the pod
Feb  3 22:03:18.527: INFO: Waiting for pod pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718 to disappear
Feb  3 22:03:18.536: INFO: Pod pod-configmaps-c1c92879-ca08-4955-a854-d9e9ba273718 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  3 22:03:18.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8373" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb  3 22:03:18.544: INFO: Running AfterSuite actions on all nodes
Feb  3 22:03:18.544: INFO: Running AfterSuite actions on node 1
Feb  3 22:03:18.544: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4568,"failed":0}

Ran 278 of 4846 Specs in 4381.758 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4568 Skipped
PASS