Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1639472561 - Will randomize all specs
Will run 6432 specs

Running in parallel across 10 nodes

Dec 14 09:02:44.411: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.416: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 14 09:02:44.447: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 14 09:02:44.500: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 14 09:02:44.500: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 14 09:02:44.500: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 14 09:02:44.515: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Dec 14 09:02:44.515: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Dec 14 09:02:44.515: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 14 09:02:44.515: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Dec 14 09:02:44.515: INFO: e2e test version: v1.22.2
Dec 14 09:02:44.517: INFO: kube-apiserver version: v1.22.0
Dec 14 09:02:44.517: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.524: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Dec 14 09:02:44.521: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.546: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Dec 14 09:02:44.528: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.553: INFO: Cluster IP family: ipv4
SS
------------------------------
Dec 14 09:02:44.531: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.557: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Dec 14 09:02:44.546: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.567: INFO: Cluster IP family: ipv4
S
------------------------------
Dec 14 09:02:44.547: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.567: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Dec 14 09:02:44.547: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.573: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Dec 14 09:02:44.560: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.581: INFO: Cluster IP family: ipv4
S
------------------------------
Dec 14 09:02:44.560: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.583: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Dec 14 09:02:44.568: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:02:44.589: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
W1214 09:02:44.600868 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.601: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.605: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:44.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2743" for this suite.
•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
W1214 09:02:44.642356 43 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.642: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.645: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:44.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3202" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1214 09:02:44.611202 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.611: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.618: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Dec 14 09:02:44.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245" in namespace "downward-api-5764" to be "Succeeded or Failed"
Dec 14 09:02:44.643: INFO: Pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.97027ms
Dec 14 09:02:46.648: INFO: Pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007762026s
Dec 14 09:02:48.651: INFO: Pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011185154s
Dec 14 09:02:50.656: INFO: Pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015929402s
STEP: Saw pod success
Dec 14 09:02:50.656: INFO: Pod "downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245" satisfied condition "Succeeded or Failed"
Dec 14 09:02:50.659: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245 container client-container:
STEP: delete the pod
Dec 14 09:02:51.041: INFO: Waiting for pod downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245 to disappear
Dec 14 09:02:51.044: INFO: Pod downwardapi-volume-45b8dbdc-427b-4615-85e8-eb83b142c245 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:51.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5764" for this suite.
• [SLOW TEST:6.469 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W1214 09:02:44.593924 56 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.594: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.602: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-421.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-421.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-421.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-421.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-421.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-421.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 14 09:02:52.670: INFO: DNS probes using dns-421/dns-test-b8c552b1-b4ce-4aac-8536-78329732407a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:52.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-421" for this suite.
• [SLOW TEST:8.142 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1214 09:02:44.648043 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.648: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.651: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-4ea281be-41ee-4f9e-86c6-7b84c16a4673
STEP: Creating a pod to test consume secrets
Dec 14 09:02:44.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae" in namespace "projected-9170" to be "Succeeded or Failed"
Dec 14 09:02:44.667: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199896ms
Dec 14 09:02:46.670: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005464973s
Dec 14 09:02:48.675: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010217392s
Dec 14 09:02:50.680: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015235991s
Dec 14 09:02:52.686: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021623833s
STEP: Saw pod success
Dec 14 09:02:52.686: INFO: Pod "pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae" satisfied condition "Succeeded or Failed"
Dec 14 09:02:52.690: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae container projected-secret-volume-test:
STEP: delete the pod
Dec 14 09:02:52.701: INFO: Waiting for pod pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae to disappear
Dec 14 09:02:52.703: INFO: Pod pod-projected-secrets-a568bc88-b74f-4a5b-8dfc-7839f1fe9cae no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:52.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9170" for this suite.
• [SLOW TEST:8.107 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W1214 09:02:44.683261 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.683: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.687: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1581.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1581.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1581.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1581.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1581.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1581.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 14 09:02:52.739: INFO: DNS probes using dns-1581/dns-test-7a456275-3f91-413f-8927-16d3243dbfbc succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:52.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1581" for this suite.
• [SLOW TEST:8.101 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":28,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:52.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-63c9c352-fa17-4eeb-867d-87baaf18d551
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:52.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9126" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:52.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:52.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9965" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:52.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:02:52.854: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:53.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1088" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1214 09:02:44.598092 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.598: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.602: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Dec 14 09:02:44.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167" in namespace "projected-8799" to be "Succeeded or Failed"
Dec 14 09:02:44.619: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Pending", Reason="", readiness=false. Elapsed: 5.686939ms
Dec 14 09:02:46.625: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011525639s
Dec 14 09:02:48.631: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017078732s
Dec 14 09:02:50.635: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021556537s
Dec 14 09:02:52.639: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02573068s
Dec 14 09:02:54.643: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02992077s
STEP: Saw pod success
Dec 14 09:02:54.644: INFO: Pod "downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167" satisfied condition "Succeeded or Failed"
Dec 14 09:02:54.647: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167 container client-container:
STEP: delete the pod
Dec 14 09:02:55.035: INFO: Waiting for pod downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167 to disappear
Dec 14 09:02:55.038: INFO: Pod downwardapi-volume-76c6541b-da16-41eb-8510-f61cb2e3d167 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:55.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8799" for this suite.
• [SLOW TEST:10.485 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W1214 09:02:44.671404 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.671: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.674: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-e02b9ce1-7525-43e0-a76d-4bfa27679505
STEP: Creating a pod to test consume configMaps
Dec 14 09:02:44.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210" in namespace "projected-7560" to be "Succeeded or Failed"
Dec 14 09:02:44.688: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431325ms
Dec 14 09:02:46.694: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007711822s
Dec 14 09:02:48.698: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012350482s
Dec 14 09:02:50.703: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016791948s
Dec 14 09:02:52.707: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020601473s
Dec 14 09:02:54.711: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025340332s
STEP: Saw pod success
Dec 14 09:02:54.711: INFO: Pod "pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210" satisfied condition "Succeeded or Failed"
Dec 14 09:02:54.715: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210 container agnhost-container:
STEP: delete the pod
Dec 14 09:02:55.235: INFO: Waiting for pod pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210 to disappear
Dec 14 09:02:55.239: INFO: Pod pod-projected-configmaps-54c77a46-12be-4836-9331-6fd516548210 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:55.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7560" for this suite.
• [SLOW TEST:10.601 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Dec 14 09:02:44.685: INFO: observed Pod pod-test in namespace pods-9025 in phase Pending with labels: map[test-pod-static:true] & conditions []
Dec 14 09:02:44.687: INFO: observed Pod pod-test in namespace pods-9025 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC }]
Dec 14 09:02:45.236: INFO: observed Pod pod-test in namespace pods-9025 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC }]
Dec 14 09:02:50.236: INFO: Found Pod pod-test in namespace pods-9025 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:02:44 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Dec 14 09:02:50.255: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Dec 14 09:02:50.277: INFO: observed event type ADDED
Dec 14 09:02:50.277: INFO: observed event type MODIFIED
Dec 14 09:02:50.278: INFO: observed event type MODIFIED
Dec 14 09:02:50.278: INFO: observed event type MODIFIED
Dec 14 09:02:50.278: INFO: observed event type MODIFIED
Dec 14 09:02:50.278: INFO: observed event type MODIFIED
Dec 14 09:02:50.278: INFO: observed event type MODIFIED
Dec 14 09:02:55.034: INFO: observed event type MODIFIED
Dec 14 09:02:55.434: INFO: observed event type MODIFIED
Dec 14 09:02:55.835: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:55.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9025" for this suite.

• [SLOW TEST:11.204 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W1214 09:02:44.724257 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.724: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.728: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Dec 14 09:02:44.732: INFO: namespace kubectl-5701
Dec 14 09:02:44.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5701 create -f -'
Dec 14 09:02:45.096: INFO: stderr: ""
Dec 14 09:02:45.096: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Dec 14 09:02:46.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:46.101: INFO: Found 0 / 1
Dec 14 09:02:47.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:47.101: INFO: Found 0 / 1
Dec 14 09:02:48.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:48.101: INFO: Found 0 / 1
Dec 14 09:02:49.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:49.101: INFO: Found 0 / 1
Dec 14 09:02:50.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:50.101: INFO: Found 0 / 1
Dec 14 09:02:51.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:51.101: INFO: Found 0 / 1
Dec 14 09:02:52.101: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:52.101: INFO: Found 0 / 1
Dec 14 09:02:53.100: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:53.100: INFO: Found 1 / 1
Dec 14 09:02:53.100: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 14 09:02:53.104: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:02:53.105: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 14 09:02:53.105: INFO: wait on agnhost-primary startup in kubectl-5701
Dec 14 09:02:53.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5701 logs agnhost-primary-vggmn agnhost-primary'
Dec 14 09:02:53.236: INFO: stderr: ""
Dec 14 09:02:53.236: INFO: stdout: "Paused\n"
STEP: exposing RC
Dec 14 09:02:53.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5701 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Dec 14 09:02:53.380: INFO: stderr: ""
Dec 14 09:02:53.380: INFO: stdout: "service/rm2 exposed\n"
Dec 14 09:02:53.383: INFO: Service rm2 in namespace kubectl-5701 found.
STEP: exposing service
Dec 14 09:02:55.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5701 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Dec 14 09:02:55.523: INFO: stderr: ""
Dec 14 09:02:55.523: INFO: stdout: "service/rm3 exposed\n"
Dec 14 09:02:55.526: INFO: Service rm3 in namespace kubectl-5701 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:57.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5701" for this suite.

• [SLOW TEST:12.847 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233
    should create services for rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":1,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:52.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-a0bac300-bf41-4737-b95c-0b538478b58f
STEP: Creating a pod to test consume secrets
Dec 14 09:02:52.775: INFO: Waiting up to 5m0s for pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3" in namespace "secrets-2121" to be "Succeeded or Failed"
Dec 14 09:02:52.778: INFO: Pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41186ms
Dec 14 09:02:54.783: INFO: Pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007529411s
Dec 14 09:02:56.788: INFO: Pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012435089s
Dec 14 09:02:58.792: INFO: Pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016681909s
STEP: Saw pod success
Dec 14 09:02:58.792: INFO: Pod "pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3" satisfied condition "Succeeded or Failed"
Dec 14 09:02:58.795: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3 container secret-volume-test:
STEP: delete the pod
Dec 14 09:02:58.813: INFO: Waiting for pod pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3 to disappear
Dec 14 09:02:58.816: INFO: Pod pod-secrets-580cc1c1-57c2-4c2f-8e98-fc6ce89f82a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:58.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2121" for this suite.
• [SLOW TEST:6.088 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:51.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-11f14e82-9be3-4402-80ee-6387a5034948
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:59.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1156" for this suite.
• [SLOW TEST:8.085 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:55.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:02:55.191: INFO: The status of Pod busybox-scheduling-c7f0a446-b10a-4091-9dfa-e35f5fabdce3 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:57.195: INFO: The status of Pod busybox-scheduling-c7f0a446-b10a-4091-9dfa-e35f5fabdce3 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:59.196: INFO: The status of Pod busybox-scheduling-c7f0a446-b10a-4091-9dfa-e35f5fabdce3 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:02:59.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9955" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:52.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Dec 14 09:02:54.981: INFO: running pods: 0 < 1
Dec 14 09:02:56.986: INFO: running pods: 0 < 1
Dec 14 09:02:58.986: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:01.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5902" for this suite.
• [SLOW TEST:8.103 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":3,"skipped":64,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:44.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
W1214 09:02:44.797266 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Dec 14 09:02:44.797: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Dec 14 09:02:44.801: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Dec 14 09:02:44.815: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:46.819: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:48.822: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:50.821: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Dec 14 09:02:50.833: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:52.837: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:54.837: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:56.837: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:02:58.837: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Dec 14 09:02:58.844: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 14 09:02:58.847: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 14 09:03:00.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 14 09:03:00.854: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 14 09:03:02.848: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 14 09:03:02.852: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2634" for this suite.

• [SLOW TEST:18.106 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":82,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:02.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-b62e50d5-aade-432d-a32e-d7b3c896e0cb
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:02.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1750" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":2,"skipped":113,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:55.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 14 09:02:56.684: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 14 09:02:58.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069376, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069376, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069376, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069376, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 14 09:03:01.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:03:01.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9560-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:04.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5256" for this suite.
STEP: Destroying namespace "webhook-5256-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.021 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:58.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Dec 14 09:02:58.896: INFO: Waiting up to 5m0s for pod "downward-api-569cf295-286b-449f-84ae-671832522ba2" in namespace "downward-api-5426" to be "Succeeded or Failed"
Dec 14 09:02:58.899: INFO: Pod "downward-api-569cf295-286b-449f-84ae-671832522ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766227ms
Dec 14 09:03:00.903: INFO: Pod "downward-api-569cf295-286b-449f-84ae-671832522ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00663048s
Dec 14 09:03:02.909: INFO: Pod "downward-api-569cf295-286b-449f-84ae-671832522ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012598897s
Dec 14 09:03:04.913: INFO: Pod "downward-api-569cf295-286b-449f-84ae-671832522ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016987198s
STEP: Saw pod success
Dec 14 09:03:04.913: INFO: Pod "downward-api-569cf295-286b-449f-84ae-671832522ba2" satisfied condition "Succeeded or Failed"
Dec 14 09:03:04.916: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downward-api-569cf295-286b-449f-84ae-671832522ba2 container dapi-container:
STEP: delete the pod
Dec 14 09:03:04.933: INFO: Waiting for pod downward-api-569cf295-286b-449f-84ae-671832522ba2 to disappear
Dec 14 09:03:04.936: INFO: Pod downward-api-569cf295-286b-449f-84ae-671832522ba2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:04.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5426" for this suite.
• [SLOW TEST:6.086 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:02:55.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396
STEP: creating an pod
Dec 14 09:02:55.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s'
Dec 14 09:02:55.424: INFO: stderr: ""
Dec 14 09:02:55.424: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for log generator to start.
Dec 14 09:02:55.424: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Dec 14 09:02:55.424: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6240" to be "running and ready, or succeeded"
Dec 14 09:02:55.428: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.153646ms
Dec 14 09:02:57.433: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008243413s
Dec 14 09:02:59.438: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.013671475s
Dec 14 09:02:59.438: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Dec 14 09:02:59.438: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Dec 14 09:02:59.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator'
Dec 14 09:02:59.567: INFO: stderr: ""
Dec 14 09:02:59.567: INFO: stdout: "I1214 09:02:56.632245 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/46k 520\nI1214 09:02:56.833258 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/5jkm 530\nI1214 09:02:57.032784 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/5prj 418\nI1214 09:02:57.233390 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/hpw6 509\nI1214 09:02:57.432929 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/8tt 279\nI1214 09:02:57.632378 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/c9x 377\nI1214 09:02:57.832773 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/xmp 271\nI1214 09:02:58.033320 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/9b6 317\nI1214 09:02:58.232648 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qrz 501\nI1214 09:02:58.433110 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/s8l 582\nI1214 09:02:58.632441 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/fcm 275\nI1214 09:02:58.832938 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/sq47 427\nI1214 09:02:59.032307 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/4kn 349\nI1214 09:02:59.232669 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/lxh 352\nI1214 09:02:59.433254 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/xblf 487\n"
STEP: limiting log lines
Dec 14 09:02:59.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator --tail=1'
Dec 14 09:02:59.691: INFO: stderr: ""
Dec 14 09:02:59.691: INFO: stdout: "I1214 09:02:59.632770 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/2wn 355\n"
Dec 14 09:02:59.691: INFO: got output "I1214 09:02:59.632770 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/2wn 355\n"
STEP: limiting log bytes
Dec 14 09:02:59.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator --limit-bytes=1'
Dec 14 09:02:59.809: INFO: stderr: ""
Dec 14 09:02:59.809: INFO: stdout: "I"
Dec 14 09:02:59.810: INFO: got output "I"
STEP: exposing timestamps
Dec 14 09:02:59.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator --tail=1 --timestamps'
Dec 14 09:02:59.935: INFO: stderr: ""
Dec 14 09:02:59.935: INFO: stdout: "2021-12-14T09:02:59.833645671Z I1214 09:02:59.833334 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/q7vm 462\n"
Dec 14 09:02:59.935: INFO: got output "2021-12-14T09:02:59.833645671Z I1214 09:02:59.833334 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/q7vm 462\n"
STEP: restricting to a time range
Dec 14 09:03:02.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator --since=1s'
Dec 14 09:03:02.561: INFO: stderr: ""
Dec 14 09:03:02.561: INFO: stdout: "I1214 09:03:01.632911 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/dr9 275\nI1214 09:03:01.833324 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/tbl4 378\nI1214 09:03:02.032809 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/tck 355\nI1214 09:03:02.233285 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/789 306\nI1214 09:03:02.432652 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/tgv2 229\n"
Dec 14 09:03:02.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6240 logs logs-generator logs-generator --since=24h'
Dec 14 09:03:02.683: INFO: stderr: ""
Dec 14 09:03:02.683: INFO: stdout: "I1214 09:02:56.632245 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/46k 520\nI1214 09:02:56.833258 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/5jkm 530\nI1214 09:02:57.032784 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/5prj 418\nI1214 09:02:57.233390 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/hpw6 509\nI1214 09:02:57.432929 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/8tt 279\nI1214 09:02:57.632378 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/c9x 377\nI1214 09:02:57.832773 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/xmp 271\nI1214 09:02:58.033320 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/9b6 317\nI1214 09:02:58.232648 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qrz 501\nI1214 09:02:58.433110 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/s8l 582\nI1214 09:02:58.632441 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/fcm 275\nI1214 09:02:58.832938 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/sq47 427\nI1214 09:02:59.032307 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/4kn 349\nI1214 09:02:59.232669 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/lxh 352\nI1214 09:02:59.433254 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/xblf 487\nI1214 09:02:59.632770 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/2wn 355\nI1214 09:02:59.833334 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/q7vm 462\nI1214 09:03:00.032785 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/qsm 203\nI1214 09:03:00.233326 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/tqkl 336\nI1214 09:03:00.432876 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/vcb9 552\nI1214 09:03:00.633394 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/nh5 586\nI1214 09:03:00.832907 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/wk6 344\nI1214 09:03:01.033384 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/znj 597\nI1214 09:03:01.232812 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/7trc 218\nI1214 09:03:01.433210 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/k6h6 302\nI1214 09:03:01.632911 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/dr9 275\nI1214 09:03:01.833324 1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/tbl4 378\nI1214 09:03:02.032809 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/tck 355\nI1214 09:03:02.233285 1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/789 306\nI1214 09:03:02.432652 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/tgv2 229\nI1214 09:03:02.633123 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/sm5 590\n"
[AfterEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401
Dec 14 09:03:02.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config
--namespace=kubectl-6240 delete pod logs-generator' Dec 14 09:03:05.243: INFO: stderr: "" Dec 14 09:03:05.243: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:05.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6240" for this suite. • [SLOW TEST:9.972 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:02:59.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:02:59.244: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401" in namespace "projected-8055" to be "Succeeded or Failed" Dec 14 09:02:59.247: INFO: Pod "downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889739ms Dec 14 09:03:01.251: INFO: Pod "downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006574641s Dec 14 09:03:03.255: INFO: Pod "downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01124635s Dec 14 09:03:05.260: INFO: Pod "downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015512337s STEP: Saw pod success Dec 14 09:03:05.260: INFO: Pod "downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401" satisfied condition "Succeeded or Failed" Dec 14 09:03:05.262: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401 container client-container: STEP: delete the pod Dec 14 09:03:05.275: INFO: Waiting for pod downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401 to disappear Dec 14 09:03:05.278: INFO: Pod downwardapi-volume-8319d78f-582a-45df-923e-8e636c939401 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:05.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8055" for this suite. 
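The logs-generator output captured in the Kubectl logs test earlier in this run has a fixed line shape (`I<MMDD> <time> <pid> logs_generator.go:<line>] <seq> <METHOD> <url> <code>`). As a minimal sketch of consuming that output, one could parse each captured line like this; the regex and the reading of the trailing integer as a status-like code are my assumptions from the lines shown, not something the test itself asserts:

```python
import re

# Shape of one logs-generator entry as captured in the test's stdout, e.g.:
#   I1214 09:02:59.632770 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/2wn 355
# NOTE: this pattern is inferred from the log above; the trailing integer is
# assumed to be a status-like code emitted by the generator.
LINE_RE = re.compile(
    r"^I(?P<date>\d{4}) (?P<time>[\d:.]+)\s+\d+ logs_generator\.go:\d+\] "
    r"(?P<seq>\d+) (?P<method>[A-Z]+) (?P<url>\S+) (?P<code>\d+)$"
)

def parse_entry(line: str) -> dict:
    """Parse one logs-generator line into its fields, or raise ValueError."""
    m = LINE_RE.match(line)
    if not m:
        raise ValueError(f"unrecognized log line: {line!r}")
    fields = m.groupdict()
    fields["seq"] = int(fields["seq"])
    fields["code"] = int(fields["code"])
    return fields

# The exact line the test retrieved with `kubectl logs ... --tail=1`:
sample = "I1214 09:02:59.632770 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/2wn 355"
entry = parse_entry(sample)
print(entry["seq"], entry["method"], entry["url"], entry["code"])
```

The `\s+` between the timestamp and the pid also tolerates klog's usual column alignment, which the flattened capture above has collapsed to single spaces.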
• [SLOW TEST:6.081 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:02:57.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:02:57.669: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 14 09:03:01.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9926 --namespace=crd-publish-openapi-9926 create -f -' Dec 14 09:03:01.889: INFO: stderr: "" Dec 14 09:03:01.889: INFO: stdout: "e2e-test-crd-publish-openapi-5926-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 14 09:03:01.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9926 --namespace=crd-publish-openapi-9926 delete e2e-test-crd-publish-openapi-5926-crds test-cr' Dec 14 
09:03:02.016: INFO: stderr: "" Dec 14 09:03:02.016: INFO: stdout: "e2e-test-crd-publish-openapi-5926-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Dec 14 09:03:02.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9926 --namespace=crd-publish-openapi-9926 apply -f -' Dec 14 09:03:02.243: INFO: stderr: "" Dec 14 09:03:02.243: INFO: stdout: "e2e-test-crd-publish-openapi-5926-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Dec 14 09:03:02.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9926 --namespace=crd-publish-openapi-9926 delete e2e-test-crd-publish-openapi-5926-crds test-cr' Dec 14 09:03:02.354: INFO: stderr: "" Dec 14 09:03:02.354: INFO: stdout: "e2e-test-crd-publish-openapi-5926-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Dec 14 09:03:02.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9926 explain e2e-test-crd-publish-openapi-5926-crds' Dec 14 09:03:02.571: INFO: stderr: "" Dec 14 09:03:02.571: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5926-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:06.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9926" for this suite. 
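The `kubectl explain` stdout captured in the CRD publish-openapi test above is a simple `KEY: value` header block (with an empty DESCRIPTION, since the CRD preserves unknown fields at the schema root and publishes no field docs). A minimal sketch of extracting those header fields; the parsing rule (top-level, non-indented `KEY: value` lines only) is an assumption based solely on the output shown here:

```python
def parse_explain(output: str) -> dict:
    """Extract top-level header fields from `kubectl explain` output.

    Only non-indented "KEY: value" lines are collected; any indented
    DESCRIPTION body text is ignored in this sketch.
    """
    fields = {}
    for line in output.splitlines():
        if ":" in line and not line.startswith((" ", "\t")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# The stdout captured by the test above (whitespace collapsed as in the log):
stdout = (
    "KIND: e2e-test-crd-publish-openapi-5926-crd\n"
    "VERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n"
    "\n"
    "DESCRIPTION:\n \n"
)
fields = parse_explain(stdout)
print(fields["KIND"], fields["VERSION"])
```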
• [SLOW TEST:8.777 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":2,"skipped":84,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:02:53.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:02:53.481: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Dec 14 09:02:53.490: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 14 09:02:58.496: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 14 09:03:02.505: INFO: Creating deployment "test-rolling-update-deployment" Dec 14 09:03:02.510: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Dec 14 09:03:02.518: INFO: new replicaset for deployment 
"test-rolling-update-deployment" is yet to be created Dec 14 09:03:04.527: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Dec 14 09:03:04.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:03:06.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069382, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:03:08.537: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:03:08.549: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8470 fcf0ef57-b1b8-4fbd-9897-e4429194b4e0 13942299 1 2021-12-14 09:03:02 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-12-14 09:03:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} 
status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f7d6f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-12-14 09:03:02 +0000 UTC,LastTransitionTime:2021-12-14 09:03:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2021-12-14 09:03:06 +0000 UTC,LastTransitionTime:2021-12-14 09:03:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 14 09:03:08.553: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-8470 e36afc71-3be2-4e65-ad58-08c912e0365c 13942290 1 2021-12-14 09:03:02 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fcf0ef57-b1b8-4fbd-9897-e4429194b4e0 0xc003f7dbe7 0xc003f7dbe8}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:03:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fcf0ef57-b1b8-4fbd-9897-e4429194b4e0\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:03:06 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f7dc98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:03:08.553: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Dec 14 09:03:08.554: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8470 8b5c6177-8232-4255-a904-d8453695e398 13942298 2 2021-12-14 09:02:53 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fcf0ef57-b1b8-4fbd-9897-e4429194b4e0 0xc003f7dab7 0xc003f7dab8}] [] [{e2e.test Update apps/v1 2021-12-14 09:02:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:03:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fcf0ef57-b1b8-4fbd-9897-e4429194b4e0\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:03:06 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003f7db78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:03:08.559: INFO: Pod "test-rolling-update-deployment-585b757574-cjfwl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-cjfwl test-rolling-update-deployment-585b757574- deployment-8470 6b0b8776-1064-4d16-b232-fda807800a23 13942289 0 2021-12-14 09:03:02 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 e36afc71-3be2-4e65-ad58-08c912e0365c 0xc0043820d7 0xc0043820d8}] [] [{kube-controller-manager Update v1 2021-12-14 09:03:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e36afc71-3be2-4e65-ad58-08c912e0365c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:03:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.188\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mjz25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjz25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:03:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:03:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:03:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:03:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.188,StartTime:2021-12-14 09:03:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:03:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://c0689eb9ad3e7ac7833d7a37c802555c369ed7a1c1ca80932bfcce4626fd3d1a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:08.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8470" for this suite. 
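The deployment dump above shows a RollingUpdate strategy with maxSurge and maxUnavailable both at 25%, and a transient `Replicas:2` while rolling a 1-replica deployment. A sketch of how those percentages resolve to absolute pod counts, assuming the documented rounding behaviour (maxSurge rounds up, maxUnavailable rounds down; verify against the Deployment docs):

```python
import math

def rollout_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Resolve percentage-based rollingUpdate settings to pod counts.

    Assumption: Kubernetes rounds maxSurge up and maxUnavailable down
    when converting percentages to absolute numbers.
    """
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return surge, unavailable

# With this test's settings (1 replica, 25%/25%) the controller may run
# 1 + 1 = 2 pods at once and must keep 1 - 0 = 1 available, which is
# consistent with the transient Replicas:2 seen in the status above.
surge, unavailable = rollout_bounds(1, 25, 25)
print(surge, unavailable)  # 1 0
```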
• [SLOW TEST:15.122 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:01.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:03:05.121: INFO: Deleting pod "var-expansion-300d0f70-3355-4866-b23c-8c410f52344b" in namespace "var-expansion-741" Dec 14 09:03:05.127: INFO: Wait up to 5m0s for pod "var-expansion-300d0f70-3355-4866-b23c-8c410f52344b" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:09.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-741" for this suite. 
• [SLOW TEST:8.070 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":4,"skipped":88,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:09.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics Dec 14 09:03:10.259: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true) Dec 14 09:03:11.148: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For 
garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:11.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3107" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":5,"skipped":89,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:02:59.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Dec 14 09:02:59.292: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:01.296: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Dec 14 
09:03:03.296: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:05.296: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Dec 14 09:03:05.310: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:07.314: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:09.315: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Dec 14 09:03:09.319: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:09.319: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:09.960: INFO: Exec stderr: "" Dec 14 09:03:09.960: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:09.960: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.099: INFO: Exec stderr: "" Dec 14 09:03:10.099: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.099: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.222: INFO: Exec stderr: "" Dec 14 09:03:10.222: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.222: INFO: >>> kubeConfig: 
/root/.kube/config Dec 14 09:03:10.360: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Dec 14 09:03:10.360: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.360: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.460: INFO: Exec stderr: "" Dec 14 09:03:10.460: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.460: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.621: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Dec 14 09:03:10.621: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.622: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.777: INFO: Exec stderr: "" Dec 14 09:03:10.777: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.777: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:10.929: INFO: Exec stderr: "" Dec 14 09:03:10.929: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:10.930: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:11.084: INFO: Exec stderr: 
"" Dec 14 09:03:11.084: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9738 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:03:11.084: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:03:11.223: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:11.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9738" for this suite. • [SLOW TEST:11.981 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:11.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 
09:03:11.230: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:12.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9565" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":6,"skipped":102,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:04.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Dec 14 09:03:05.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-414 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Dec 14 09:03:05.134: INFO: stderr: "" Dec 14 09:03:05.134: INFO: stdout: 
"pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 Dec 14 09:03:05.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-414 delete pods e2e-test-httpd-pod' Dec 14 09:03:12.847: INFO: stderr: "" Dec 14 09:03:12.847: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:12.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-414" for this suite. • [SLOW TEST:7.876 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:05.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Dec 14 09:03:05.295: INFO: Waiting up to 5m0s for pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4" in namespace "containers-1989" to be "Succeeded or Failed" Dec 14 09:03:05.298: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971521ms Dec 14 09:03:07.302: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006716387s Dec 14 09:03:09.308: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012215716s Dec 14 09:03:11.311: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015769275s Dec 14 09:03:13.317: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02148725s STEP: Saw pod success Dec 14 09:03:13.317: INFO: Pod "client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4" satisfied condition "Succeeded or Failed" Dec 14 09:03:13.321: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4 container agnhost-container: STEP: delete the pod Dec 14 09:03:13.336: INFO: Waiting for pod client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4 to disappear Dec 14 09:03:13.340: INFO: Pod client-containers-f8bf2d70-6548-4a4e-a1b4-aab2c345ddf4 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:13.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1989" for this suite. 
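The repeated `Phase="Pending" … Elapsed: …` lines above come from a poll-until-timeout loop (roughly 2-second intervals against a 5-minute budget, judging by the timestamps). A sketch of the same pattern under those assumptions — the `wait_for` name and signature are illustrative, not the framework's API:

```python
import time

def wait_for(check, timeout_s=300.0, interval_s=2.0):
    """Poll check() until it returns True; give up once timeout_s elapses."""
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return True
        # Stop if the next sleep would carry us past the deadline.
        if time.monotonic() + interval_s > deadline:
            return False
        time.sleep(interval_s)
```

The design choice mirrored here is that the condition is checked immediately on entry, so a pod that is already `Succeeded` costs no sleep at all.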
• [SLOW TEST:8.092 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:08.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Dec 14 09:03:08.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5714 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Dec 14 09:03:08.763: INFO: stderr: "" Dec 14 09:03:08.763: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the 
pod e2e-test-httpd-pod was created Dec 14 09:03:13.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5714 get pod e2e-test-httpd-pod -o json' Dec 14 09:03:13.925: INFO: stderr: "" Dec 14 09:03:13.925: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-12-14T09:03:08Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5714\",\n \"resourceVersion\": \"13942480\",\n \"uid\": \"e07a428c-a169-4737-9a9e-7573f034e66f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-mkggr\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"capi-v1.22-md-0-698f477975-vkd62\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-mkggr\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n 
{\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-12-14T09:03:08Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-12-14T09:03:11Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-12-14T09:03:11Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-12-14T09:03:08Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://837311d52707198bc3800b11c9406a456d5efa6c116f38eea7f468809700945e\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-12-14T09:03:10Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.25.0.9\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.2.110\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.2.110\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-12-14T09:03:08Z\"\n }\n}\n" STEP: replace the image in the pod Dec 14 09:03:13.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5714 replace -f -' Dec 14 09:03:14.220: INFO: stderr: "" Dec 14 09:03:14.220: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod 
e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Dec 14 09:03:14.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5714 delete pods e2e-test-httpd-pod' Dec 14 09:03:16.544: INFO: stderr: "" Dec 14 09:03:16.544: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:16.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5714" for this suite. • [SLOW TEST:7.946 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":5,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:12.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-251467e3-e551-43ec-9716-7e948906d2e5 STEP: Creating a pod to test consume secrets Dec 14 09:03:12.907: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8" in namespace "projected-7240" to be "Succeeded or Failed" Dec 14 09:03:12.910: INFO: Pod "pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364453ms Dec 14 09:03:14.915: INFO: Pod "pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008534456s Dec 14 09:03:16.919: INFO: Pod "pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012624535s STEP: Saw pod success Dec 14 09:03:16.920: INFO: Pod "pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8" satisfied condition "Succeeded or Failed" Dec 14 09:03:16.924: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8 container projected-secret-volume-test: STEP: delete the pod Dec 14 09:03:16.938: INFO: Waiting for pod pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8 to disappear Dec 14 09:03:16.941: INFO: Pod pod-projected-secrets-614af75c-5724-4bd4-946b-794d6546f6a8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:16.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7240" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:11.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Dec 14 09:03:11.376: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:13.383: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Dec 14 09:03:13.403: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:15.407: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Dec 14 09:03:15.416: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 14 09:03:15.419: INFO: Pod pod-with-prestop-http-hook still exists
Dec 14 09:03:17.420: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 14 09:03:17.423: INFO: Pod pod-with-prestop-http-hook still exists
Dec 14 09:03:19.421: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 14 09:03:19.425: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:19.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6631" for this suite. • [SLOW TEST:8.106 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:16.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:03:17.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:03:19.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069397, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069397, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069397, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069397, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:03:22.351: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:22.403: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1649" for this suite. STEP: Destroying namespace "webhook-1649-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.486 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:02:44.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Dec 14 09:03:24.942: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true) Dec 14 09:03:25.799: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: 
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Dec 14 09:03:25.799: INFO: Deleting pod "simpletest.rc-4ks9z" in namespace "gc-1230"
Dec 14 09:03:25.808: INFO: Deleting pod "simpletest.rc-6xtcx" in namespace "gc-1230"
Dec 14 09:03:25.818: INFO: Deleting pod "simpletest.rc-7l99t" in namespace "gc-1230"
Dec 14 09:03:25.824: INFO: Deleting pod "simpletest.rc-f5jj4" in namespace "gc-1230"
Dec 14 09:03:25.831: INFO: Deleting pod "simpletest.rc-fhfg8" in namespace "gc-1230"
Dec 14 09:03:25.840: INFO: Deleting pod "simpletest.rc-jp8rk" in namespace "gc-1230"
Dec 14 09:03:25.846: INFO: Deleting pod "simpletest.rc-qjl7w" in namespace "gc-1230"
Dec 14 09:03:25.853: INFO: Deleting pod "simpletest.rc-qzbqw" in namespace "gc-1230"
Dec 14 09:03:25.859: INFO: Deleting pod "simpletest.rc-r75vm" in namespace "gc-1230"
Dec 14 09:03:25.865: INFO: Deleting pod "simpletest.rc-vpqgq" in namespace "gc-1230"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:25.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1230" for this suite.
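[Editor's note] The garbage-collector test above deletes the replication controller with an orphan delete option and then verifies the pods are *not* collected (only cleaned up explicitly at the end). A minimal toy model of that propagation-policy semantics is sketched below; the function and object names (`delete_owner`, `owner-rc`, `pod-N`) are hypothetical illustrations, not the kube-controller-manager implementation.

```python
# Toy model of Kubernetes deletion propagation: "Orphan" keeps dependents
# and strips their ownerReferences; cascading policies delete them.
def delete_owner(owner_name, objects, policy="Background"):
    remaining = [o for o in objects if o["name"] != owner_name]
    survivors = []
    for obj in remaining:
        if owner_name in obj["ownerReferences"]:
            if policy == "Orphan":
                # Orphaned dependent survives, back-reference removed.
                obj["ownerReferences"].remove(owner_name)
                survivors.append(obj)
            # Background/Foreground: dependent is garbage-collected.
        else:
            survivors.append(obj)
    return survivors

pods = [{"name": "pod-%d" % i, "ownerReferences": ["owner-rc"]} for i in range(3)]
objects = [{"name": "owner-rc", "ownerReferences": []}] + pods
orphaned = delete_owner("owner-rc", objects, policy="Orphan")
print([p["name"] for p in orphaned])  # ['pod-0', 'pod-1', 'pod-2']
```

With `policy="Background"` the same call would return an empty list, which is the behavior the test is guarding against when it waits 30 seconds "to see if the garbage collector mistakenly deletes the pods".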
• [SLOW TEST:41.103 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:22.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating Pod
STEP: Reading file content from the nginx-container
Dec 14 09:03:26.496: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-455 PodName:pod-sharedvolume-ca8347f4-10df-4b70-86ef-5bdee5160b27 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Dec 14 09:03:26.496: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:03:26.636: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:26.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-455" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:19.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Dec 14 09:03:19.633: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Dec 14 09:03:19.638: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Dec 14 09:03:19.638: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Dec 14 09:03:19.646: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Dec 14 09:03:19.646: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Dec 14 09:03:19.653: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Dec 14 09:03:19.654: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Dec 14 09:03:26.682: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:26.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4787" for this suite.
• [SLOW TEST:7.113 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":5,"skipped":158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:12.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service endpoint-test2 in namespace services-6424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6424 to expose endpoints map[]
Dec 14 09:03:12.359: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
Dec 14 09:03:13.376: INFO: successfully validated that service endpoint-test2 in namespace services-6424 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-6424
Dec 14 09:03:13.399: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:15.403: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6424 to expose endpoints map[pod1:[80]]
Dec 14 09:03:15.418: INFO: successfully validated that service endpoint-test2 in namespace services-6424 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Dec 14 09:03:15.418: INFO: Creating new exec pod
Dec 14 09:03:20.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Dec 14 09:03:20.696: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:20.696: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Dec 14 09:03:20.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.200.70 80'
Dec 14 09:03:20.955: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.137.200.70 80\nConnection to 10.137.200.70 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:20.956: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Creating pod pod2 in namespace services-6424
Dec 14 09:03:20.964: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:22.970: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:24.968: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6424 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 14 09:03:24.990: INFO: successfully validated that service endpoint-test2 in namespace services-6424 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Checking if the Service forwards traffic to pod1 and pod2
Dec 14 09:03:25.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Dec 14 09:03:26.238: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:26.238: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Dec 14 09:03:26.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.200.70 80'
Dec 14 09:03:26.466: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.200.70 80\nConnection to 10.137.200.70 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:26.466: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod1 in namespace services-6424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6424 to expose endpoints map[pod2:[80]]
Dec 14 09:03:27.497: INFO: successfully validated that service endpoint-test2 in namespace services-6424 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Dec 14 09:03:28.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Dec 14 09:03:28.787: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:28.787: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Dec 14 09:03:28.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6424 exec execpodn8dt5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.200.70 80'
Dec 14 09:03:29.051: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.200.70 80\nConnection to 10.137.200.70 80 port [tcp/http] succeeded!\n"
Dec 14 09:03:29.051: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod2 in namespace services-6424
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6424 to expose endpoints map[]
Dec 14 09:03:29.081: INFO: successfully validated that service endpoint-test2 in namespace services-6424 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:29.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6424" for this suite.
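[Editor's note] Two details in the Services test above are worth calling out. The `HTTP/1.1 400 Bad Request` responses are expected: the probe pipes the bare string `hostName` into `nc`, which is not a valid HTTP request, so the test only relies on the TCP connection succeeding. Separately, the test repeatedly compares the service's observed Endpoints against an expected pod-name-to-ports map such as `map[pod1:[80] pod2:[80]]`. A sketch of that comparison follows; `endpoints_match` is a hypothetical helper, not the e2e framework's code.

```python
# Compare observed Endpoints against the expected pod -> ports map,
# ignoring pod ordering and port ordering.
def endpoints_match(observed, expected):
    norm = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return norm(observed) == norm(expected)

print(endpoints_match({"pod1": [80], "pod2": [80]},
                      {"pod2": [80], "pod1": [80]}))  # True
print(endpoints_match({"pod2": [80]},
                      {"pod1": [80], "pod2": [80]}))  # False
```

This mirrors the log's progression: the expected map grows to `map[pod1:[80] pod2:[80]]` as pods become Ready, then shrinks back to `map[]` as they are deleted.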
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• [SLOW TEST:16.790 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":7,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:06.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Dec 14 09:03:06.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1505" for this suite.
• [SLOW TEST:23.400 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":3,"skipped":87,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:26.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Dec 14 09:03:26.759: INFO: The status of Pod annotationupdateb6436952-5972-483f-84f4-e7b8f38eb53c is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:28.763: INFO: The status of Pod annotationupdateb6436952-5972-483f-84f4-e7b8f38eb53c is Running (Ready = true)
Dec 14 09:03:29.287: INFO: Successfully updated pod "annotationupdateb6436952-5972-483f-84f4-e7b8f38eb53c"
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:31.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6893" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:31.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:31.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4356" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":7,"skipped":181,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:29.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:03:29.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
Dec 14 09:03:29.920: INFO: The status of Pod pod-logs-websocket-5197b5a0-1d94-428d-bf91-54f7d8fe182b is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:03:31.935: INFO: The status of Pod pod-logs-websocket-5197b5a0-1d94-428d-bf91-54f7d8fe182b is Running (Ready = true)
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:31.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3982" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:29.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 14 09:03:32.216: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8996" for this suite.
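[Editor's note] The Container Runtime test above exercises `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails without writing to its termination-message file, the kubelet uses the tail of the container's log as the termination message instead. A toy model of that selection logic is sketched below; `termination_message` is a hypothetical illustration, not kubelet code, and it ignores details such as the log-tail size cap.

```python
# Toy model of how the termination message is chosen under the two policies.
def termination_message(message_file_contents, log_tail, policy):
    if message_file_contents:
        # A written termination-message file always wins.
        return message_file_contents
    if policy == "FallbackToLogsOnError":
        # Failed container, empty message file: fall back to the log tail.
        return log_tail
    # Default policy "File": no file contents means no message.
    return ""

# The test's container exits non-zero, writes nothing to
# /dev/termination-log, and logs "DONE" -- so "DONE" becomes the message.
print(termination_message("", "DONE", "FallbackToLogsOnError"))  # DONE
```

That matches the log line `Expected: &{DONE} to match Container's Termination Message: DONE`.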
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:13.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Dec 14 09:03:13.422: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:03:17.230: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5521" for this suite.
• [SLOW TEST:19.007 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:26.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:03:26.778: INFO: Creating ReplicaSet my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee
Dec 14 09:03:26.784: INFO: Pod name my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee: Found 0 pods out of 1
Dec 14 09:03:31.788: INFO: Pod name my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee: Found 1 pods out of 1
Dec 14 09:03:31.788: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee" is running
Dec 14 09:03:31.791: INFO: Pod "my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee-b5pcv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:03:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:03:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:03:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:03:26 +0000 UTC Reason: Message:}])
Dec 14 09:03:31.791: INFO: Trying to dial the pod
Dec 14 09:03:36.806: INFO: Controller my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee: Got expected result from replica 1 [my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee-b5pcv]: "my-hostname-basic-af00d503-efee-4e81-89ca-8b00620090ee-b5pcv", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:36.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3100" for this suite.
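[Editor's note] The ReplicaSet test above dials each replica and accepts it only when the HTTP response body echoes that pod's own name ("Got expected result from replica 1 [...]: ..., 1 of 1 required successes so far"). A sketch of that per-replica success check follows; `all_replicas_ok` and the `rs-pod-*` names are hypothetical, not the e2e framework's code.

```python
# Each replica serves its own hostname (its pod name); the controller check
# passes once every required replica has echoed its own name back.
def all_replicas_ok(responses, required):
    successes = {pod for pod, body in responses.items() if body.strip() == pod}
    return len(successes) >= required

print(all_replicas_ok({"rs-pod-a": "rs-pod-a"}, required=1))  # True
print(all_replicas_ok({"rs-pod-a": "some-other-name"}, required=1))  # False
```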
• [SLOW TEST:10.064 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":8,"skipped":97,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:32.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 14 09:03:32.061: INFO: Waiting up to 5m0s for pod "pod-fde63630-67f0-47ae-95be-24a85baa333e" in namespace "emptydir-724" to be "Succeeded or Failed"
Dec 14 09:03:32.065: INFO: Pod "pod-fde63630-67f0-47ae-95be-24a85baa333e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.775943ms
Dec 14 09:03:34.069: INFO: Pod "pod-fde63630-67f0-47ae-95be-24a85baa333e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007633026s
Dec 14 09:03:36.074: INFO: Pod "pod-fde63630-67f0-47ae-95be-24a85baa333e": Phase="Running", Reason="", readiness=true. Elapsed: 4.013107053s
Dec 14 09:03:38.079: INFO: Pod "pod-fde63630-67f0-47ae-95be-24a85baa333e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017840133s
STEP: Saw pod success
Dec 14 09:03:38.079: INFO: Pod "pod-fde63630-67f0-47ae-95be-24a85baa333e" satisfied condition "Succeeded or Failed"
Dec 14 09:03:38.082: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-fde63630-67f0-47ae-95be-24a85baa333e container test-container:
STEP: delete the pod
Dec 14 09:03:38.096: INFO: Waiting for pod pod-fde63630-67f0-47ae-95be-24a85baa333e to disappear
Dec 14 09:03:38.099: INFO: Pod pod-fde63630-67f0-47ae-95be-24a85baa333e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:38.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-724" for this suite.
• [SLOW TEST:6.088 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":128,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:04.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6820.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6820.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6820.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.88.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.88.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.88.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.88.42_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6820.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6820.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6820.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6820.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6820.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 42.88.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.88.42_udp@PTR;check="$$(dig +tcp +noall +answer +search 42.88.133.10.in-addr.arpa.
PTR)" && test -n "$$check" && echo OK > /results/10.133.88.42_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:03:08.955: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:08.959: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:08.963: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:08.967: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:08.993: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:08.997: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:09.000: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod 
dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:09.005: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:09.028: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:14.034: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.038: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.042: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod 
dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.074: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.077: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.081: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.085: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:14.107: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:19.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod 
dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.041: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.044: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.068: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.073: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.077: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.081: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not 
find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:19.104: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:24.032: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.035: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.038: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.068: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods 
dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.072: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.076: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.079: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:24.100: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:29.034: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods 
dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.041: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.045: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.071: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.078: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.081: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:29.103: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:34.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.037: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.041: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.045: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.072: INFO: Unable to read jessie_udp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.075: INFO: Unable to read jessie_tcp@dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.079: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.084: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local from pod dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639: the server could not find the requested resource (get pods dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639) Dec 14 09:03:34.108: INFO: Lookups using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 failed for: [wheezy_udp@dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@dns-test-service.dns-6820.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_udp@dns-test-service.dns-6820.svc.cluster.local jessie_tcp@dns-test-service.dns-6820.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6820.svc.cluster.local] Dec 14 09:03:39.105: INFO: DNS probes using dns-6820/dns-test-eaad8dbd-2ea9-4754-bd9f-e9314e4be639 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6820" for this suite. 
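The probe pods in the test above build the pod A-record name and the reverse (PTR) query name mechanically from the pod IP. A minimal sketch of both transformations, using the IP 10.133.88.42 that appears in this log as sample input (the `dns-6820` namespace is taken from the test; the cluster domain `cluster.local` is the default and is an assumption here):

```shell
ip="10.133.88.42"

# Forward: dots become dashes, then the namespace's pod domain is appended,
# giving the pod A-record name the probes query.
a_record=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6820.pod.cluster.local"}')
echo "$a_record"    # 10-133-88-42.dns-6820.pod.cluster.local

# Reverse: octets are reversed and suffixed with in-addr.arpa. for the PTR query,
# matching the "42.88.133.10.in-addr.arpa." name seen in the log.
ptr_name=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr_name"    # 42.88.133.10.in-addr.arpa.
```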
• [SLOW TEST:34.259 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:39.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:39.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7668" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:32.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318
STEP: creating the pod
Dec 14 09:03:32.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 create -f -'
Dec 14 09:03:32.860: INFO: stderr: ""
Dec 14 09:03:32.860: INFO: stdout: "pod/pause created\n"
Dec 14 09:03:32.860: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 14 09:03:32.860: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9853" to be "running and ready"
Dec 14 09:03:32.864: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.690296ms
Dec 14 09:03:34.868: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007696839s
Dec 14 09:03:36.872: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011677476s
Dec 14 09:03:38.877: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.016603364s
Dec 14 09:03:38.877: INFO: Pod "pause" satisfied condition "running and ready"
Dec 14 09:03:38.877: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 14 09:03:38.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 label pods pause testing-label=testing-label-value'
Dec 14 09:03:38.998: INFO: stderr: ""
Dec 14 09:03:38.998: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 14 09:03:38.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 get pod pause -L testing-label'
Dec 14 09:03:39.107: INFO: stderr: ""
Dec 14 09:03:39.107: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 14 09:03:39.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 label pods pause testing-label-'
Dec 14 09:03:39.228: INFO: stderr: ""
Dec 14 09:03:39.228: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 14 09:03:39.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 get pod pause -L testing-label'
Dec 14 09:03:39.336: INFO: stderr: ""
Dec 14 09:03:39.336: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n"
[AfterEach] Kubectl label
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324
STEP: using delete to clean up resources
Dec 14 09:03:39.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 delete --grace-period=0 --force -f -'
Dec 14 09:03:39.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 14 09:03:39.456: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 14 09:03:39.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 get rc,svc -l name=pause --no-headers'
Dec 14 09:03:39.573: INFO: stderr: "No resources found in kubectl-9853 namespace.\n"
Dec 14 09:03:39.573: INFO: stdout: ""
Dec 14 09:03:39.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9853 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 14 09:03:39.682: INFO: stderr: ""
Dec 14 09:03:39.682: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:03:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9853" for this suite.
• [SLOW TEST:7.171 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":5,"skipped":98,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:05.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:03:11.372: INFO: DNS probes using dns-test-446a77ef-661d-41c0-ada5-3d0ce09069c2 succeeded STEP: deleting the pod STEP: changing the 
externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:03:17.406: INFO: File wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:17.410: INFO: File jessie_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:17.410: INFO: Lookups using dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b failed for: [wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local jessie_udp@dns-test-service-3.dns-445.svc.cluster.local] Dec 14 09:03:22.415: INFO: File wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:22.418: INFO: File jessie_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Dec 14 09:03:22.418: INFO: Lookups using dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b failed for: [wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local jessie_udp@dns-test-service-3.dns-445.svc.cluster.local] Dec 14 09:03:27.415: INFO: File wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:27.418: INFO: File jessie_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:27.419: INFO: Lookups using dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b failed for: [wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local jessie_udp@dns-test-service-3.dns-445.svc.cluster.local] Dec 14 09:03:32.415: INFO: File wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:32.420: INFO: File jessie_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:32.420: INFO: Lookups using dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b failed for: [wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local jessie_udp@dns-test-service-3.dns-445.svc.cluster.local] Dec 14 09:03:37.414: INFO: File wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' Dec 14 09:03:37.417: INFO: File jessie_udp@dns-test-service-3.dns-445.svc.cluster.local from pod dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Dec 14 09:03:37.418: INFO: Lookups using dns-445/dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b failed for: [wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local jessie_udp@dns-test-service-3.dns-445.svc.cluster.local] Dec 14 09:03:42.420: INFO: DNS probes using dns-test-b78626af-c77c-4569-87f8-7bdf35b5c93b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-445.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-445.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:03:46.482: INFO: DNS probes using dns-test-f73e081d-cfb7-4d29-ac07-e154b6ca8c56 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:46.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-445" for this suite. 
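The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" failures above are the expected transient state: after the externalName change, the prober re-reads each result file every five seconds until the answer matches, and only a lookup that never converges fails the spec. A minimal sketch of that retry-until-match pattern in Python (the `lookup` callable and timings here are stand-ins for illustration, not the e2e framework's actual implementation):

```python
import time

def wait_for_dns(lookup, expected, timeout=60.0, interval=5.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll lookup() until it returns expected or timeout elapses.

    Mirrors the log above: each stale answer is recorded and retried;
    a match on any attempt ends the wait successfully.
    """
    deadline = clock() + timeout
    stale = []
    while True:
        got = lookup()
        if got == expected:
            return stale  # the stale answers seen before convergence
        stale.append(got)
        if clock() >= deadline:
            raise TimeoutError(f"still seeing {got!r}, wanted {expected!r}")
        sleep(interval)

# Simulated propagation: the first three reads still return the old CNAME.
answers = iter(["foo.example.com."] * 3 + ["bar.example.com."])
failures = wait_for_dns(lambda: next(answers), "bar.example.com.",
                        sleep=lambda s: None)  # skip real sleeping in the demo
print(len(failures))  # 3
```

In the log, five stale reads were recorded between 09:03:17 and 09:03:37 before the probe at 09:03:42 reported success, which is exactly this loop converging.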
• [SLOW TEST:41.196 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Dec 14 09:03:38.195: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 14 09:03:43.200: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:47.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4451" for this suite. 
• [SLOW TEST:9.085 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":148,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:46.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-49adb6b8-fc90-4caa-9c96-568ef5f2c9d0 STEP: Creating the pod Dec 14 09:03:46.593: INFO: The status of Pod pod-projected-configmaps-b15baa82-9d42-4b97-9b84-d230174232b3 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:48.598: INFO: The status of Pod pod-projected-configmaps-b15baa82-9d42-4b97-9b84-d230174232b3 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-49adb6b8-fc90-4caa-9c96-568ef5f2c9d0 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:50.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2906" 
for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:39.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics Dec 14 09:03:49.842: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true) Dec 14 09:03:50.776: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For 
namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Dec 14 09:03:50.776: INFO: Deleting pod "simpletest-rc-to-be-deleted-bbszs" in namespace "gc-1392" Dec 14 09:03:50.788: INFO: Deleting pod "simpletest-rc-to-be-deleted-ctkrd" in namespace "gc-1392" Dec 14 09:03:50.794: INFO: Deleting pod "simpletest-rc-to-be-deleted-fsm79" in namespace "gc-1392" Dec 14 09:03:50.801: INFO: Deleting pod "simpletest-rc-to-be-deleted-h2ktl" in namespace "gc-1392" Dec 14 09:03:50.810: INFO: Deleting pod "simpletest-rc-to-be-deleted-hslfq" in namespace "gc-1392" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:50.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1392" for this suite. 
• [SLOW TEST:11.111 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":6,"skipped":105,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:50.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:03:50.765: INFO: Got root ca configmap in namespace "svcaccounts-8077" Dec 14 09:03:50.769: INFO: Deleted root ca configmap in namespace "svcaccounts-8077" STEP: waiting for a new root ca configmap created Dec 14 09:03:51.275: INFO: Recreated root ca configmap in namespace "svcaccounts-8077" Dec 14 09:03:51.280: INFO: Updated root ca configmap in namespace "svcaccounts-8077" STEP: waiting for the root ca configmap reconciled Dec 14 09:03:51.784: INFO: Reconciled root ca configmap in namespace "svcaccounts-8077" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:51.784: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8077" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:16.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9412, will wait for the garbage collector to delete the pods Dec 14 09:03:20.743: INFO: Deleting Job.batch foo took: 5.960649ms Dec 14 09:03:20.844: INFO: Terminating Job.batch foo pods took: 100.674361ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:52.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9412" for this suite. 
• [SLOW TEST:35.415 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:36.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8246 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8246 STEP: creating replication controller externalsvc in namespace services-8246 I1214 09:03:36.898714 56 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8246, replica count: 2 I1214 09:03:39.950069 56 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Dec 14 09:03:39.972: INFO: Creating new exec pod Dec 14 09:03:49.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8246 exec execpodsckpw -- /bin/sh -x -c nslookup 
nodeport-service.services-8246.svc.cluster.local' Dec 14 09:03:50.291: INFO: stderr: "+ nslookup nodeport-service.services-8246.svc.cluster.local\n" Dec 14 09:03:50.291: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-8246.svc.cluster.local\tcanonical name = externalsvc.services-8246.svc.cluster.local.\nName:\texternalsvc.services-8246.svc.cluster.local\nAddress: 10.137.68.77\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8246, will wait for the garbage collector to delete the pods Dec 14 09:03:50.352: INFO: Deleting ReplicationController externalsvc took: 6.304392ms Dec 14 09:03:50.452: INFO: Terminating ReplicationController externalsvc pods took: 100.666362ms Dec 14 09:03:55.069: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:55.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8246" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:18.247 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":9,"skipped":104,"failed":0} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:51.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Dec 14 09:03:51.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8962 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Dec 14 09:03:51.985: INFO: stderr: "" Dec 14 09:03:51.985: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Dec 14 09:03:51.985: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=kubectl-8962 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' Dec 14 09:03:52.283: INFO: stderr: "" Dec 14 09:03:52.283: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Dec 14 09:03:52.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8962 delete pods e2e-test-httpd-pod' Dec 14 09:03:56.965: INFO: stderr: "" Dec 14 09:03:56.965: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:56.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8962" for this suite. • [SLOW TEST:5.150 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:913 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":7,"skipped":108,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:55.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-7e596524-c646-403a-9435-2d765c30bc71 STEP: Creating a pod to test consume configMaps Dec 14 09:03:55.146: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d" in namespace "projected-5763" to be "Succeeded or Failed" Dec 14 09:03:55.149: INFO: Pod "pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075177ms Dec 14 09:03:57.152: INFO: Pod "pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006597752s STEP: Saw pod success Dec 14 09:03:57.153: INFO: Pod "pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d" satisfied condition "Succeeded or Failed" Dec 14 09:03:57.156: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d container agnhost-container: STEP: delete the pod Dec 14 09:03:57.168: INFO: Waiting for pod pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d to disappear Dec 14 09:03:57.171: INFO: Pod pod-projected-configmaps-a4900995-71dd-45c1-9456-4dacafcc5c4d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:57.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5763" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":107,"failed":0} SSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":6,"skipped":113,"failed":0} [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:52.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Dec 14 09:03:52.104: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint Dec 14 09:03:54.122: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint Dec 14 09:03:56.133: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:58.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-6514" for this suite. 
• [SLOW TEST:6.088 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":7,"skipped":113,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:57.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-2104/configmap-test-3a3464c5-7f79-453e-a157-30c7359f7d13 STEP: Creating a pod to test consume configMaps Dec 14 09:03:57.237: INFO: Waiting up to 5m0s for pod "pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db" in namespace "configmap-2104" to be "Succeeded or Failed" Dec 14 09:03:57.240: INFO: Pod "pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596757ms Dec 14 09:03:59.244: INFO: Pod "pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007441147s STEP: Saw pod success Dec 14 09:03:59.245: INFO: Pod "pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db" satisfied condition "Succeeded or Failed" Dec 14 09:03:59.248: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db container env-test: STEP: delete the pod Dec 14 09:03:59.263: INFO: Waiting for pod pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db to disappear Dec 14 09:03:59.266: INFO: Pod pod-configmaps-18807665-8229-4a2b-9760-f41767fe11db no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:03:59.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2104" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:58.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:03:58.235: INFO: Waiting up to 5m0s for pod 
"busybox-readonly-false-edc81c42-e2be-4576-b6dc-70b7b7d1637d" in namespace "security-context-test-6387" to be "Succeeded or Failed" Dec 14 09:03:58.238: INFO: Pod "busybox-readonly-false-edc81c42-e2be-4576-b6dc-70b7b7d1637d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.9055ms Dec 14 09:04:00.242: INFO: Pod "busybox-readonly-false-edc81c42-e2be-4576-b6dc-70b7b7d1637d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006579839s Dec 14 09:04:00.242: INFO: Pod "busybox-readonly-false-edc81c42-e2be-4576-b6dc-70b7b7d1637d" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:00.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6387" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:39.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get 
the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:03.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4755" for this suite. 
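Specs that exceed the runner's slow-spec threshold are flagged with a `[SLOW TEST:… seconds]` marker, like the 41.196 s DNS run and the container-runtime run above. Pulling those durations out of a captured log takes only a regex; a minimal sketch (the 10-second cutoff is an arbitrary choice for the example, not ginkgo's configured threshold):

```python
import re

def slow_tests(log_text, threshold=10.0):
    """Extract [SLOW TEST:N seconds] durations at or above threshold, slowest first."""
    durations = (float(d) for d in
                 re.findall(r"\[SLOW TEST:([\d.]+) seconds\]", log_text))
    return sorted((d for d in durations if d >= threshold), reverse=True)

log = ("• [SLOW TEST:41.196 seconds] ... • [SLOW TEST:9.085 seconds] "
       "... • [SLOW TEST:24.247 seconds]")
print(slow_tests(log))  # [41.196, 24.247]
```

Sorting the full run's markers this way is a quick way to find which conformance specs dominate wall-clock time on a given cluster.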
• [SLOW TEST:24.247 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:59.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-jdw6c in namespace proxy-1265 I1214 09:03:59.370953 56 runners.go:190] Created replication controller with name: proxy-service-jdw6c, namespace: proxy-1265, replica count: 1 I1214 09:04:00.423368 56 runners.go:190] proxy-service-jdw6c Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:04:01.423888 56 runners.go:190] proxy-service-jdw6c Pods: 1 out of 1 created, 1 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:04:01.428: INFO: setup took 2.071319369s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 14 09:04:01.435: INFO: (0) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 6.344602ms) Dec 14 09:04:01.435: INFO: (0) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 6.880863ms) Dec 14 09:04:01.435: INFO: (0) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 6.852001ms) Dec 14 09:04:01.435: INFO: (0) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 7.131404ms) Dec 14 09:04:01.435: INFO: (0) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 6.817034ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 7.452245ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 7.293546ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 7.550556ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 7.371265ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 7.500396ms) Dec 14 09:04:01.436: INFO: (0) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 7.621077ms) Dec 14 09:04:01.444: INFO: (0) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 15.510071ms) Dec 14 09:04:01.444: INFO: (0) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 15.52173ms) Dec 14 09:04:01.444: INFO: (0) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 15.836949ms) Dec 14 09:04:01.445: INFO: (0) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 16.166026ms) Dec 14 09:04:01.445: INFO: (0) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... (200; 3.316787ms) Dec 14 09:04:01.449: INFO: (1) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 4.418667ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 4.516474ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.599577ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 4.707713ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.127741ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.263652ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.181331ms) Dec 14 09:04:01.450: INFO: (1) 
/api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.371009ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 5.287419ms) Dec 14 09:04:01.450: INFO: (1) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.410927ms) Dec 14 09:04:01.451: INFO: (1) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.521673ms) Dec 14 09:04:01.451: INFO: (1) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.679547ms) Dec 14 09:04:01.451: INFO: (1) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: ... (200; 4.486313ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 4.603342ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 4.662476ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 4.720247ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 4.749416ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.968779ms) Dec 14 09:04:01.456: INFO: (2) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.056065ms) Dec 14 09:04:01.457: INFO: (2) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 5.776457ms) Dec 14 09:04:01.461: INFO: (3) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 3.733697ms) Dec 14 09:04:01.461: INFO: (3) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 3.79247ms) Dec 14 09:04:01.461: INFO: (3) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 3.879124ms) Dec 14 09:04:01.461: INFO: (3) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 4.030296ms) Dec 14 09:04:01.461: INFO: (3) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 3.957871ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 4.275287ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 4.281605ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 4.383392ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 4.749567ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 4.908336ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 4.827541ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 4.943871ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.861269ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 4.969154ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 4.876188ms) Dec 14 09:04:01.462: INFO: (3) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 4.95245ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 4.898965ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 5.075998ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 5.505813ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.482781ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.736644ms) Dec 14 09:04:01.468: INFO: (4) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.7485ms) Dec 14 09:04:01.469: INFO: (4) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 6.133113ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 3.721ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 3.903349ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 3.959893ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 3.961428ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 4.204612ms) Dec 14 09:04:01.473: INFO: (5) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 4.890596ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 4.898004ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 5.152991ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.122455ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.058984ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 4.975139ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.991231ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.028272ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.052878ms) Dec 14 09:04:01.480: INFO: (6) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.165154ms) Dec 14 09:04:01.484: INFO: (7) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 3.731596ms) Dec 14 09:04:01.484: INFO: (7) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 3.697485ms) Dec 14 09:04:01.484: INFO: (7) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 3.832937ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 4.164178ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.24254ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 4.653117ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.046266ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 4.906694ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.077886ms) Dec 14 09:04:01.485: INFO: (7) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.125796ms) Dec 14 09:04:01.486: INFO: (7) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.600292ms) Dec 14 09:04:01.486: INFO: (7) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.637361ms) Dec 14 09:04:01.486: INFO: (7) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.829457ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 4.15581ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 4.381029ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 4.352713ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 4.315191ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 4.402482ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 4.377184ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 4.550142ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 4.577419ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 4.749817ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.743858ms) Dec 14 09:04:01.491: INFO: (8) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 4.667789ms) Dec 14 09:04:01.492: INFO: (8) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.500911ms) Dec 14 09:04:01.492: INFO: (8) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.534417ms) Dec 14 09:04:01.492: INFO: (8) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.541556ms) Dec 14 09:04:01.492: INFO: (8) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 5.493931ms) Dec 14 09:04:01.496: INFO: (9) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 4.050202ms) Dec 14 09:04:01.496: INFO: (9) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 3.991605ms) Dec 14 09:04:01.496: INFO: (9) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 4.217162ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.758382ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 4.786809ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 4.849872ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 4.905909ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.035603ms) Dec 14 09:04:01.497: INFO: (9) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.083285ms) Dec 14 09:04:01.498: INFO: (9) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.384311ms) Dec 14 09:04:01.498: INFO: (9) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.501932ms) Dec 14 09:04:01.498: INFO: (9) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.534048ms) Dec 14 09:04:01.498: INFO: (9) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 5.43054ms) Dec 14 09:04:01.498: INFO: (9) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 5.527101ms) Dec 14 09:04:01.502: INFO: (10) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 3.747195ms) Dec 14 09:04:01.502: INFO: (10) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 5.513687ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.631297ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.734128ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.615435ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 5.707611ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.675046ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.825912ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.864182ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.900189ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 6.003845ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 6.04315ms) Dec 14 09:04:01.504: INFO: (10) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 6.116202ms) Dec 14 09:04:01.508: INFO: (11) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 3.787259ms) Dec 14 09:04:01.508: INFO: (11) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 3.92935ms) Dec 14 09:04:01.508: INFO: (11) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 3.953694ms) Dec 14 09:04:01.508: INFO: (11) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: ... 
(200; 5.097291ms) Dec 14 09:04:01.509: INFO: (11) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.051967ms) Dec 14 09:04:01.509: INFO: (11) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 5.20502ms) Dec 14 09:04:01.510: INFO: (11) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.131133ms) Dec 14 09:04:01.510: INFO: (11) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.590611ms) Dec 14 09:04:01.513: INFO: (12) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 3.521908ms) Dec 14 09:04:01.514: INFO: (12) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 5.003523ms) Dec 14 09:04:01.515: INFO: (12) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.374687ms) Dec 14 09:04:01.515: INFO: (12) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... 
(200; 5.420956ms) Dec 14 09:04:01.515: INFO: (12) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.497521ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.409016ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.620148ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.48223ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 5.511577ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.558856ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.726845ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 5.521121ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.722485ms) Dec 14 09:04:01.516: INFO: (12) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.605117ms) Dec 14 09:04:01.521: INFO: (13) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 4.993592ms) Dec 14 09:04:01.521: INFO: (13) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.31974ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.394719ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.516906ms) Dec 14 09:04:01.521: INFO: (13) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.334251ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.363772ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 5.355399ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.672483ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.594202ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 5.850869ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.742843ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.645237ms) Dec 14 09:04:01.522: INFO: (13) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: ... (200; 3.072103ms) Dec 14 09:04:01.527: INFO: (14) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 4.529751ms) Dec 14 09:04:01.527: INFO: (14) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 4.73738ms) Dec 14 09:04:01.527: INFO: (14) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 4.729539ms) Dec 14 09:04:01.527: INFO: (14) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 5.044323ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 5.065772ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.120561ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.343202ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.276481ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.28076ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.452318ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.598131ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo 
(200; 5.622238ms) Dec 14 09:04:01.528: INFO: (14) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.936472ms) Dec 14 09:04:01.529: INFO: (14) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 4.703647ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 5.040185ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.087126ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.034933ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.099483ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 5.043213ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 5.091787ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 5.067352ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 5.256301ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 5.165368ms) Dec 14 09:04:01.534: INFO: (15) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: ... (200; 5.828059ms) Dec 14 09:04:01.540: INFO: (16) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.896714ms) Dec 14 09:04:01.540: INFO: (16) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... 
(200; 5.979077ms) Dec 14 09:04:01.541: INFO: (16) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 6.485749ms) Dec 14 09:04:01.541: INFO: (16) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 6.48805ms) Dec 14 09:04:01.541: INFO: (16) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 6.571126ms) Dec 14 09:04:01.541: INFO: (16) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 6.756469ms) Dec 14 09:04:01.542: INFO: (16) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname1/proxy/: tls baz (200; 7.370899ms) Dec 14 09:04:01.542: INFO: (16) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 7.411393ms) Dec 14 09:04:01.542: INFO: (16) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 7.310129ms) Dec 14 09:04:01.542: INFO: (16) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 5.62062ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 5.633521ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 5.646467ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.705137ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname1/proxy/: foo (200; 5.75151ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:462/proxy/: tls qux (200; 5.697001ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... 
(200; 5.899191ms) Dec 14 09:04:01.548: INFO: (17) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: ... (200; 5.678337ms) Dec 14 09:04:01.554: INFO: (18) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 5.893837ms) Dec 14 09:04:01.554: INFO: (18) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l/proxy/: test (200; 5.808449ms) Dec 14 09:04:01.554: INFO: (18) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:1080/proxy/: test<... (200; 6.005912ms) Dec 14 09:04:01.554: INFO: (18) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test (200; 4.398969ms) Dec 14 09:04:01.559: INFO: (19) /api/v1/namespaces/proxy-1265/pods/proxy-service-jdw6c-m4s8l:160/proxy/: foo (200; 4.443682ms) Dec 14 09:04:01.559: INFO: (19) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname1/proxy/: foo (200; 4.631742ms) Dec 14 09:04:01.559: INFO: (19) /api/v1/namespaces/proxy-1265/services/https:proxy-service-jdw6c:tlsportname2/proxy/: tls qux (200; 4.582296ms) Dec 14 09:04:01.559: INFO: (19) /api/v1/namespaces/proxy-1265/services/http:proxy-service-jdw6c:portname2/proxy/: bar (200; 4.608204ms) Dec 14 09:04:01.559: INFO: (19) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:1080/proxy/: ... (200; 4.570914ms) Dec 14 09:04:01.560: INFO: (19) /api/v1/namespaces/proxy-1265/pods/http:proxy-service-jdw6c-m4s8l:162/proxy/: bar (200; 4.999318ms) Dec 14 09:04:01.560: INFO: (19) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:443/proxy/: test<... 
(200; 5.521725ms)
Dec 14 09:04:01.560: INFO: (19) /api/v1/namespaces/proxy-1265/pods/https:proxy-service-jdw6c-m4s8l:460/proxy/: tls baz (200; 5.621736ms)
Dec 14 09:04:01.560: INFO: (19) /api/v1/namespaces/proxy-1265/services/proxy-service-jdw6c:portname2/proxy/: bar (200; 5.557307ms)
STEP: deleting ReplicationController proxy-service-jdw6c in namespace proxy-1265, will wait for the garbage collector to delete the pods
Dec 14 09:04:01.621: INFO: Deleting ReplicationController proxy-service-jdw6c took: 6.38484ms
Dec 14 09:04:01.722: INFO: Terminating ReplicationController proxy-service-jdw6c pods took: 100.981343ms
[AfterEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:04.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1265" for this suite.
• [SLOW TEST:5.016 seconds]
[sig-network] Proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
should proxy through a service and a pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":12,"skipped":131,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:00.287: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:04.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7867" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":9,"skipped":143,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:04.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7df678dc-3fd2-437f-acee-d2df0691b655
STEP: Creating a pod to test consume secrets
Dec 14 09:04:04.418: INFO: Waiting up to 5m0s for pod "pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a" in namespace "secrets-4696" to be "Succeeded or Failed"
Dec 14 09:04:04.421: INFO: Pod "pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.899165ms
Dec 14 09:04:06.425: INFO: Pod "pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.006874514s STEP: Saw pod success Dec 14 09:04:06.425: INFO: Pod "pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a" satisfied condition "Succeeded or Failed" Dec 14 09:04:06.428: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a container secret-volume-test: STEP: delete the pod Dec 14 09:04:06.443: INFO: Waiting for pod pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a to disappear Dec 14 09:04:06.446: INFO: Pod pod-secrets-30732254-5a61-4a59-9d10-6d311a161e1a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:06.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4696" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:32.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n 
"$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5032 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5032 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5032.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5032.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5032.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.114.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.114.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.114.130.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.130.114.14_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5032 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5032;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5032 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5032;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5032.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5032.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5032.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5032.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5032.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5032.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5032.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.114.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.114.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.114.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.114.14_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:03:36.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.361: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.365: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.368: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.373: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods 
dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.409: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.412: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.416: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.422: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the 
requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.427: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.430: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:36.447: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc] Dec 14 09:03:41.453: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.457: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not 
find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.479: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.508: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.511: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: 
the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.514: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.518: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.521: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.528: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.531: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:41.551: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc] Dec 14 09:03:46.453: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.457: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.460: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.477: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.481: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.507: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.510: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.514: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.518: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.522: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.529: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:46.558: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc] Dec 14 09:03:51.454: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.458: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 
09:03:51.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.507: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.511: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.515: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods 
dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.523: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.526: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.530: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.533: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:51.553: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc 
jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc] Dec 14 09:03:56.452: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.456: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.460: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.464: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.467: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.472: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.480: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod 
dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.509: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.514: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.518: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.526: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.530: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.535: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.539: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:03:56.563: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc] Dec 14 09:04:01.451: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:04:01.457: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:04:01.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:04:01.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3) Dec 14 09:04:01.468: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.475: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.518: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.522: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.525: INFO: Unable to read jessie_udp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.528: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032 from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.534: INFO: Unable to read jessie_udp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.538: INFO: Unable to read jessie_tcp@dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.541: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.544: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc from pod dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3: the server could not find the requested resource (get pods dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3)
Dec 14 09:04:01.566: INFO: Lookups using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5032 wheezy_tcp@dns-test-service.dns-5032 wheezy_udp@dns-test-service.dns-5032.svc wheezy_tcp@dns-test-service.dns-5032.svc wheezy_udp@_http._tcp.dns-test-service.dns-5032.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5032.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5032 jessie_tcp@dns-test-service.dns-5032 jessie_udp@dns-test-service.dns-5032.svc jessie_tcp@dns-test-service.dns-5032.svc jessie_udp@_http._tcp.dns-test-service.dns-5032.svc jessie_tcp@_http._tcp.dns-test-service.dns-5032.svc]
Dec 14 09:04:06.552: INFO: DNS probes using dns-5032/dns-test-f1fdd576-3f1b-4acf-9282-63c3915153a3 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:06.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5032" for this suite.
• [SLOW TEST:34.298 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":162,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:06.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Dec 14 09:04:06.667: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Dec 14 09:04:06.672: INFO: starting watch
STEP: patching
STEP: updating
Dec 14 09:04:06.684: INFO: waiting for watch events with expected annotations
Dec 14 09:04:06.684: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:06.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-2673" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":10,"skipped":172,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:03.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1262e7bf-0768-46a2-ae76-4ab061f0e3e8
STEP: Creating a pod to test consume secrets
Dec 14 09:04:03.601: INFO: Waiting up to 5m0s for pod "pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda" in namespace "secrets-3688" to be "Succeeded or Failed"
Dec 14 09:04:03.605: INFO: Pod "pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda": Phase="Pending", Reason="", readiness=false. Elapsed: 3.453794ms
Dec 14 09:04:05.609: INFO: Pod "pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007822918s
Dec 14 09:04:07.614: INFO: Pod "pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012795305s
STEP: Saw pod success
Dec 14 09:04:07.614: INFO: Pod "pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda" satisfied condition "Succeeded or Failed"
Dec 14 09:04:07.617: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda container secret-volume-test:
STEP: delete the pod
Dec 14 09:04:07.631: INFO: Waiting for pod pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda to disappear
Dec 14 09:04:07.634: INFO: Pod pod-secrets-68af1ac1-402c-45be-88b7-5ae70596acda no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3688" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:56.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Dec 14 09:04:07.049: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true)
Dec 14 09:04:07.898: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:07.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8104" for this suite.
• [SLOW TEST:10.929 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":8,"skipped":109,"failed":0}
SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":68,"failed":0}
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:07.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Dec 14 09:04:07.681: INFO: Waiting up to 5m0s for pod "var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c" in namespace "var-expansion-6845" to be "Succeeded or Failed"
Dec 14 09:04:07.684: INFO: Pod "var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645692ms
Dec 14 09:04:09.689: INFO: Pod "var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007762177s
STEP: Saw pod success
Dec 14 09:04:09.689: INFO: Pod "var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c" satisfied condition "Succeeded or Failed"
Dec 14 09:04:09.693: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c container dapi-container:
STEP: delete the pod
Dec 14 09:04:09.708: INFO: Waiting for pod var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c to disappear
Dec 14 09:04:09.712: INFO: Pod var-expansion-f0029986-4300-42aa-8a19-0d0501f8f71c no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:09.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6845" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:04.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-72b6a541-44b5-49be-9a69-beb557c0ae48
STEP: Creating a pod to test consume secrets
Dec 14 09:04:04.413: INFO: Waiting up to 5m0s for pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee" in namespace "secrets-1977" to be "Succeeded or Failed"
Dec 14 09:04:04.416: INFO: Pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470953ms
Dec 14 09:04:06.420: INFO: Pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007100756s
Dec 14 09:04:08.425: INFO: Pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee": Phase="Running", Reason="", readiness=true. Elapsed: 4.011964824s
Dec 14 09:04:10.430: INFO: Pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016783735s
STEP: Saw pod success
Dec 14 09:04:10.430: INFO: Pod "pod-secrets-9cdc03d8-a454-4174-835b-697f544accee" satisfied condition "Succeeded or Failed"
Dec 14 09:04:10.433: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-secrets-9cdc03d8-a454-4174-835b-697f544accee container secret-volume-test:
STEP: delete the pod
Dec 14 09:04:10.449: INFO: Waiting for pod pod-secrets-9cdc03d8-a454-4174-835b-697f544accee to disappear
Dec 14 09:04:10.452: INFO: Pod pod-secrets-9cdc03d8-a454-4174-835b-697f544accee no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:10.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1977" for this suite.
• [SLOW TEST:6.107 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:06.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 14 09:04:06.548: INFO: Waiting up to 5m0s for pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712" in namespace "emptydir-3227" to be "Succeeded or Failed"
Dec 14 09:04:06.551: INFO: Pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767255ms
Dec 14 09:04:08.556: INFO: Pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007642006s
Dec 14 09:04:10.562: INFO: Pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013980732s
Dec 14 09:04:12.565: INFO: Pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0175755s
STEP: Saw pod success
Dec 14 09:04:12.566: INFO: Pod "pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712" satisfied condition "Succeeded or Failed"
Dec 14 09:04:12.569: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712 container test-container:
STEP: delete the pod
Dec 14 09:04:12.582: INFO: Waiting for pod pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712 to disappear
Dec 14 09:04:12.584: INFO: Pod pod-fe44ac90-44e1-4a4f-afdd-9baa6edc2712 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:12.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3227" for this suite.
• [SLOW TEST:6.083 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":165,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:50.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-kz44
STEP: Creating a pod to test atomic-volume-subpath
Dec 14 09:03:50.896: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kz44" in namespace "subpath-1512" to be "Succeeded or Failed"
Dec 14 09:03:50.899: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.5953ms
Dec 14 09:03:52.904: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007860469s
Dec 14 09:03:54.909: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 4.013470466s
Dec 14 09:03:56.913: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 6.017257297s
Dec 14 09:03:58.918: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 8.022299115s
Dec 14 09:04:00.923: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 10.02670511s
Dec 14 09:04:02.929: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 12.032862755s
Dec 14 09:04:04.935: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 14.03870389s
Dec 14 09:04:06.939: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 16.043044566s
Dec 14 09:04:08.945: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 18.048819249s
Dec 14 09:04:10.950: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Running", Reason="", readiness=true. Elapsed: 20.053752709s
Dec 14 09:04:12.954: INFO: Pod "pod-subpath-test-downwardapi-kz44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.057939627s
STEP: Saw pod success
Dec 14 09:04:12.954: INFO: Pod "pod-subpath-test-downwardapi-kz44" satisfied condition "Succeeded or Failed"
Dec 14 09:04:12.957: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-subpath-test-downwardapi-kz44 container test-container-subpath-downwardapi-kz44:
STEP: delete the pod
Dec 14 09:04:12.970: INFO: Waiting for pod pod-subpath-test-downwardapi-kz44 to disappear
Dec 14 09:04:12.973: INFO: Pod pod-subpath-test-downwardapi-kz44 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-kz44
Dec 14 09:04:12.973: INFO: Deleting pod "pod-subpath-test-downwardapi-kz44" in namespace "subpath-1512"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1512" for this suite.
• [SLOW TEST:22.137 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:06.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Dec 14 09:04:06.785: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the sample API server.
Dec 14 09:04:07.514: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Dec 14 09:04:14.760: INFO: Waited 5.206889907s for the sample-apiserver to be ready to handle requests.
STEP: Read Status for v1alpha1.wardle.example.com
STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
STEP: List APIServices
Dec 14 09:04:14.822: INFO: Found v1alpha1.wardle.example.com in APIServiceList
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:15.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6256" for this suite.
• [SLOW TEST:8.601 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":187,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:15.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7840" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":217,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:15.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Request ServerVersion
STEP: Confirm major version
Dec 14 09:04:15.640: INFO: Major version: 1
STEP: Confirm minor version
Dec 14 09:04:15.640: INFO: cleanMinorVersion: 22
Dec 14 09:04:15.640: INFO: Minor version: 22
[AfterEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:15.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-1097" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":13,"skipped":250,"failed":0}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:12.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:04:12.634: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:15.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4319" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":15,"skipped":170,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:15.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Dec 14 09:04:15.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64" in namespace "downward-api-866" to be "Succeeded or Failed"
Dec 14 09:04:15.686: INFO: Pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688305ms
Dec 14 09:04:17.690: INFO: Pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006392837s
Dec 14 09:04:19.695: INFO: Pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011384233s
Dec 14 09:04:21.700: INFO: Pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016581559s
STEP: Saw pod success
Dec 14 09:04:21.700: INFO: Pod "downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64" satisfied condition "Succeeded or Failed"
Dec 14 09:04:21.704: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64 container client-container:
STEP: delete the pod
Dec 14 09:04:21.719: INFO: Waiting for pod downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64 to disappear
Dec 14 09:04:21.721: INFO: Pod downwardapi-volume-ec2e60bf-71e0-4277-87b6-2f4435286c64 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:21.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-866" for this suite.
• [SLOW TEST:6.078 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":251,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:03:47.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Dec 14 09:03:47.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Dec 14 09:04:02.089: INFO: >>> kubeConfig: /root/.kube/config
Dec 14 09:04:05.886: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:22.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6698" for this suite.
• [SLOW TEST:34.761 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:10.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:04:22.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1789" for this suite.
• [SLOW TEST:12.047 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":11,"skipped":182,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:04:22.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: starting the proxy server
Dec 14 09:04:22.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8271
proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:22.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8271" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":12,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:07.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Dec 14 09:04:07.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 create -f -' Dec 14 09:04:08.243: INFO: stderr: "" Dec 14 09:04:08.243: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 14 09:04:08.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:04:08.357: INFO: stderr: "" Dec 14 09:04:08.357: INFO: stdout: "update-demo-nautilus-47fs2 update-demo-nautilus-ktfm2 " Dec 14 09:04:08.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:08.459: INFO: stderr: "" Dec 14 09:04:08.459: INFO: stdout: "" Dec 14 09:04:08.459: INFO: update-demo-nautilus-47fs2 is created but not running Dec 14 09:04:13.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:04:13.576: INFO: stderr: "" Dec 14 09:04:13.576: INFO: stdout: "update-demo-nautilus-47fs2 update-demo-nautilus-ktfm2 " Dec 14 09:04:13.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:13.674: INFO: stderr: "" Dec 14 09:04:13.675: INFO: stdout: "true" Dec 14 09:04:13.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:13.778: INFO: stderr: "" Dec 14 09:04:13.778: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:13.778: INFO: validating pod update-demo-nautilus-47fs2 Dec 14 09:04:13.787: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:13.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 14 09:04:13.787: INFO: update-demo-nautilus-47fs2 is verified up and running Dec 14 09:04:13.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-ktfm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:13.895: INFO: stderr: "" Dec 14 09:04:13.895: INFO: stdout: "true" Dec 14 09:04:13.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-ktfm2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:14.006: INFO: stderr: "" Dec 14 09:04:14.006: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:14.006: INFO: validating pod update-demo-nautilus-ktfm2 Dec 14 09:04:14.011: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:14.011: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 14 09:04:14.011: INFO: update-demo-nautilus-ktfm2 is verified up and running STEP: scaling down the replication controller Dec 14 09:04:14.021: INFO: scanned /root for discovery docs: Dec 14 09:04:14.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Dec 14 09:04:15.158: INFO: stderr: "" Dec 14 09:04:15.158: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 14 09:04:15.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:04:15.261: INFO: stderr: "" Dec 14 09:04:15.261: INFO: stdout: "update-demo-nautilus-47fs2 " Dec 14 09:04:15.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:15.368: INFO: stderr: "" Dec 14 09:04:15.368: INFO: stdout: "true" Dec 14 09:04:15.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:15.478: INFO: stderr: "" Dec 14 09:04:15.479: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:15.479: INFO: validating pod update-demo-nautilus-47fs2 Dec 14 09:04:15.482: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:15.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 14 09:04:15.482: INFO: update-demo-nautilus-47fs2 is verified up and running STEP: scaling up the replication controller Dec 14 09:04:15.490: INFO: scanned /root for discovery docs: Dec 14 09:04:15.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Dec 14 09:04:16.623: INFO: stderr: "" Dec 14 09:04:16.623: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 14 09:04:16.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:04:16.741: INFO: stderr: "" Dec 14 09:04:16.741: INFO: stdout: "update-demo-nautilus-47fs2 update-demo-nautilus-8pqgn " Dec 14 09:04:16.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:16.847: INFO: stderr: "" Dec 14 09:04:16.847: INFO: stdout: "true" Dec 14 09:04:16.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:16.955: INFO: stderr: "" Dec 14 09:04:16.955: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:16.955: INFO: validating pod update-demo-nautilus-47fs2 Dec 14 09:04:16.960: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:16.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 14 09:04:16.960: INFO: update-demo-nautilus-47fs2 is verified up and running Dec 14 09:04:16.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-8pqgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:17.068: INFO: stderr: "" Dec 14 09:04:17.068: INFO: stdout: "" Dec 14 09:04:17.068: INFO: update-demo-nautilus-8pqgn is created but not running Dec 14 09:04:22.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:04:22.206: INFO: stderr: "" Dec 14 09:04:22.206: INFO: stdout: "update-demo-nautilus-47fs2 update-demo-nautilus-8pqgn " Dec 14 09:04:22.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:22.320: INFO: stderr: "" Dec 14 09:04:22.320: INFO: stdout: "true" Dec 14 09:04:22.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-47fs2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:22.427: INFO: stderr: "" Dec 14 09:04:22.427: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:22.427: INFO: validating pod update-demo-nautilus-47fs2 Dec 14 09:04:22.432: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:22.432: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 14 09:04:22.432: INFO: update-demo-nautilus-47fs2 is verified up and running Dec 14 09:04:22.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-8pqgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:04:22.544: INFO: stderr: "" Dec 14 09:04:22.544: INFO: stdout: "true" Dec 14 09:04:22.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods update-demo-nautilus-8pqgn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:04:22.653: INFO: stderr: "" Dec 14 09:04:22.653: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:04:22.653: INFO: validating pod update-demo-nautilus-8pqgn Dec 14 09:04:22.664: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:04:22.664: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 14 09:04:22.664: INFO: update-demo-nautilus-8pqgn is verified up and running STEP: using delete to clean up resources Dec 14 09:04:22.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 delete --grace-period=0 --force -f -' Dec 14 09:04:22.768: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:04:22.768: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 14 09:04:22.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get rc,svc -l name=update-demo --no-headers' Dec 14 09:04:22.886: INFO: stderr: "No resources found in kubectl-3082 namespace.\n" Dec 14 09:04:22.886: INFO: stdout: "" Dec 14 09:04:22.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3082 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 14 09:04:22.996: INFO: stderr: "" Dec 14 09:04:22.996: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:22.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3082" for this suite. 
• [SLOW TEST:15.089 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":9,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:22.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-8a6d8a0b-cc36-406d-b0b5-c174ad0a58ac STEP: Creating a pod to test consume configMaps Dec 14 09:04:22.107: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472" in namespace "configmap-1672" to be "Succeeded or Failed" Dec 14 09:04:22.111: INFO: Pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472": Phase="Pending", Reason="", readiness=false. Elapsed: 3.463963ms Dec 14 09:04:24.115: INFO: Pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007907213s Dec 14 09:04:26.121: INFO: Pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013470449s Dec 14 09:04:28.125: INFO: Pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017478459s STEP: Saw pod success Dec 14 09:04:28.125: INFO: Pod "pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472" satisfied condition "Succeeded or Failed" Dec 14 09:04:28.128: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472 container agnhost-container: STEP: delete the pod Dec 14 09:04:28.143: INFO: Waiting for pod pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472 to disappear Dec 14 09:04:28.146: INFO: Pod pod-configmaps-3b9042c3-8587-4b99-ac37-278959a9c472 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:28.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1672" for this suite. 
• [SLOW TEST:6.094 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:15.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Dec 14 09:04:16.623: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:04:16.640: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:04:18.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:04:20.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:04:22.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069456, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:04:25.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:25.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9705-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3531" for this suite. STEP: Destroying namespace "webhook-3531-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.116 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":16,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:22.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Dec 14 09:04:22.934: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:30.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "init-container-6968" for this suite. • [SLOW TEST:7.206 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:13.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:04:13.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:04:15.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:04:17.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:04:19.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069453, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:04:22.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Dec 14 09:04:30.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-6564 attach --namespace=webhook-6564 to-be-attached-pod -i -c=container1' Dec 14 09:04:30.611: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:30.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6564" for this suite. STEP: Destroying namespace "webhook-6564-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.628 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:09.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Dec 14 09:04:09.776: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.777: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.781: INFO: observed 
Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.781: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.791: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.791: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.804: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:09.804: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 and labels map[test-deployment-static:true] Dec 14 09:04:13.344: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment-static:true] Dec 14 09:04:13.344: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment-static:true] Dec 14 09:04:14.147: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Dec 14 09:04:14.153: INFO: observed event type ADDED STEP: waiting for Replicas to scale Dec 14 09:04:14.155: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.155: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.155: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.155: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.156: INFO: observed Deployment 
test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 0 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.156: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.159: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.159: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.167: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.167: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:14.177: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:14.177: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:14.183: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:14.183: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 
09:04:22.757: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:22.757: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:22.771: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 STEP: listing Deployments Dec 14 09:04:22.775: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Dec 14 09:04:22.786: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Dec 14 09:04:22.793: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:22.794: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:22.800: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:22.810: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:22.820: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:28.957: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:29.357: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:29.371: INFO: observed Deployment test-deployment in namespace 
deployment-8679 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:29.381: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Dec 14 09:04:31.142: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Dec 14 09:04:31.170: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:31.170: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 1 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 3 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 2 Dec 14 09:04:31.171: INFO: observed Deployment test-deployment in namespace deployment-8679 with ReadyReplicas 3 STEP: deleting the Deployment Dec 14 09:04:31.179: INFO: observed event type MODIFIED Dec 14 09:04:31.179: INFO: observed event type MODIFIED Dec 14 09:04:31.179: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: 
INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED Dec 14 09:04:31.180: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:04:31.184: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:31.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8679" for this suite. • [SLOW TEST:21.464 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:31.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 14 09:04:33.413: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:33.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5006" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":145,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:28.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:29.031: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 14 09:04:34.036: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 14 09:04:34.036: INFO: 
Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:04:34.053: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4727 21282007-1a06-4420-881e-f2b8a3519935 13946338 1 2021-12-14 09:04:34 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-12-14 09:04:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004fbc9d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Dec 14 09:04:34.056: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-4727 68bb2362-f250-4ceb-b932-47a0f3ae52df 13946341 1 2021-12-14 09:04:34 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 21282007-1a06-4420-881e-f2b8a3519935 0xc004d173e7 0xc004d173e8}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:04:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21282007-1a06-4420-881e-f2b8a3519935\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d17478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:04:34.056: INFO: All old 
ReplicaSets of Deployment "test-cleanup-deployment": Dec 14 09:04:34.056: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4727 999d0c10-5feb-47dd-88a7-b064cff8e8d4 13946339 1 2021-12-14 09:04:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 21282007-1a06-4420-881e-f2b8a3519935 0xc004d172b7 0xc004d172b8}] [] [{e2e.test Update apps/v1 2021-12-14 09:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:04:30 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2021-12-14 09:04:34 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"21282007-1a06-4420-881e-f2b8a3519935\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d17378 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:04:34.060: INFO: Pod "test-cleanup-controller-xbvf9" is available: &Pod{ObjectMeta:{test-cleanup-controller-xbvf9 test-cleanup-controller- deployment-4727 c98f6867-aff9-4ab0-839b-144d31717ca4 13945888 0 2021-12-14 09:04:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 999d0c10-5feb-47dd-88a7-b064cff8e8d4 0xc004fbcce7 0xc004fbcce8}] [] [{kube-controller-manager Update v1 2021-12-14 09:04:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"999d0c10-5feb-47dd-88a7-b064cff8e8d4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:04:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.228\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2rbkw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rbkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Al
ways,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.228,StartTime:2021-12-14 09:04:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:04:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://031cb3c4a9c4b1b28438b7ac3cbdaec9493c9b58d5fbef8d62f1814d2ed52b98,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:04:34.061: INFO: Pod "test-cleanup-deployment-5b4d99b59b-slvgk" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-slvgk test-cleanup-deployment-5b4d99b59b- deployment-4727 e74b5212-2ec8-408e-b106-62caf54dc6e0 13946344 0 2021-12-14 09:04:34 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 68bb2362-f250-4ceb-b932-47a0f3ae52df 0xc004fbced7 0xc004fbced8}] [] [{kube-controller-manager Update v1 2021-12-14 09:04:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"68bb2362-f250-4ceb-b932-47a0f3ae52df\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zhxr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhxr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:34.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-4727" for this suite. • [SLOW TEST:5.076 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":17,"skipped":206,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:23.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:23.140: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-311 I1214 09:04:23.159346 23 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-311, replica count: 1 I1214 09:04:24.210202 23 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:04:25.210838 23 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:04:26.211752 23 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:04:26.327: INFO: Created: latency-svc-vg2js 
Dec 14 09:04:26.336: INFO: Got endpoints: latency-svc-vg2js [23.37202ms] Dec 14 09:04:26.347: INFO: Created: latency-svc-f72zk Dec 14 09:04:26.350: INFO: Got endpoints: latency-svc-f72zk [14.545973ms] Dec 14 09:04:26.353: INFO: Created: latency-svc-f8f9s Dec 14 09:04:26.356: INFO: Got endpoints: latency-svc-f8f9s [20.02725ms] Dec 14 09:04:26.363: INFO: Created: latency-svc-wq48j Dec 14 09:04:26.367: INFO: Got endpoints: latency-svc-wq48j [30.438629ms] Dec 14 09:04:26.369: INFO: Created: latency-svc-x2w4m Dec 14 09:04:26.371: INFO: Got endpoints: latency-svc-x2w4m [35.207347ms] Dec 14 09:04:26.374: INFO: Created: latency-svc-v479t Dec 14 09:04:26.376: INFO: Got endpoints: latency-svc-v479t [40.412271ms] Dec 14 09:04:26.379: INFO: Created: latency-svc-xdw2s Dec 14 09:04:26.381: INFO: Got endpoints: latency-svc-xdw2s [44.924784ms] Dec 14 09:04:26.383: INFO: Created: latency-svc-6hq49 Dec 14 09:04:26.390: INFO: Got endpoints: latency-svc-6hq49 [53.692735ms] Dec 14 09:04:26.392: INFO: Created: latency-svc-n28ts Dec 14 09:04:26.395: INFO: Got endpoints: latency-svc-n28ts [58.645557ms] Dec 14 09:04:26.399: INFO: Created: latency-svc-rxn58 Dec 14 09:04:26.402: INFO: Got endpoints: latency-svc-rxn58 [66.050376ms] Dec 14 09:04:26.405: INFO: Created: latency-svc-qwk5m Dec 14 09:04:26.408: INFO: Got endpoints: latency-svc-qwk5m [72.000565ms] Dec 14 09:04:26.413: INFO: Created: latency-svc-l4lg8 Dec 14 09:04:26.415: INFO: Got endpoints: latency-svc-l4lg8 [79.2343ms] Dec 14 09:04:26.420: INFO: Created: latency-svc-dk2rp Dec 14 09:04:26.422: INFO: Got endpoints: latency-svc-dk2rp [86.126753ms] Dec 14 09:04:26.428: INFO: Created: latency-svc-gh96w Dec 14 09:04:26.431: INFO: Got endpoints: latency-svc-gh96w [94.584109ms] Dec 14 09:04:26.436: INFO: Created: latency-svc-fdrdc Dec 14 09:04:26.440: INFO: Got endpoints: latency-svc-fdrdc [103.629834ms] Dec 14 09:04:26.444: INFO: Created: latency-svc-bgznj Dec 14 09:04:26.446: INFO: Got endpoints: latency-svc-bgznj [109.861422ms] Dec 14 
09:04:26.451: INFO: Created: latency-svc-twbbx Dec 14 09:04:26.454: INFO: Got endpoints: latency-svc-twbbx [103.106253ms] Dec 14 09:04:26.457: INFO: Created: latency-svc-gjgqt Dec 14 09:04:26.460: INFO: Got endpoints: latency-svc-gjgqt [104.201752ms] Dec 14 09:04:26.465: INFO: Created: latency-svc-pf7c5 Dec 14 09:04:26.467: INFO: Got endpoints: latency-svc-pf7c5 [100.26781ms] Dec 14 09:04:26.471: INFO: Created: latency-svc-c25kj Dec 14 09:04:26.474: INFO: Got endpoints: latency-svc-c25kj [102.536133ms] Dec 14 09:04:26.482: INFO: Created: latency-svc-p9fzx Dec 14 09:04:26.485: INFO: Got endpoints: latency-svc-p9fzx [108.854581ms] Dec 14 09:04:26.488: INFO: Created: latency-svc-f87jm Dec 14 09:04:26.499: INFO: Got endpoints: latency-svc-f87jm [117.844629ms] Dec 14 09:04:26.501: INFO: Created: latency-svc-8lg4n Dec 14 09:04:26.503: INFO: Got endpoints: latency-svc-8lg4n [113.101097ms] Dec 14 09:04:26.506: INFO: Created: latency-svc-bxmp2 Dec 14 09:04:26.509: INFO: Got endpoints: latency-svc-bxmp2 [113.899001ms] Dec 14 09:04:26.514: INFO: Created: latency-svc-z26wz Dec 14 09:04:26.518: INFO: Got endpoints: latency-svc-z26wz [115.272641ms] Dec 14 09:04:26.522: INFO: Created: latency-svc-2js4t Dec 14 09:04:26.526: INFO: Got endpoints: latency-svc-2js4t [117.621171ms] Dec 14 09:04:26.529: INFO: Created: latency-svc-8nlbp Dec 14 09:04:26.532: INFO: Got endpoints: latency-svc-8nlbp [116.029045ms] Dec 14 09:04:26.537: INFO: Created: latency-svc-t6bq6 Dec 14 09:04:26.540: INFO: Got endpoints: latency-svc-t6bq6 [117.525293ms] Dec 14 09:04:26.545: INFO: Created: latency-svc-kpvbp Dec 14 09:04:26.547: INFO: Got endpoints: latency-svc-kpvbp [115.907671ms] Dec 14 09:04:26.551: INFO: Created: latency-svc-rr9t7 Dec 14 09:04:26.555: INFO: Got endpoints: latency-svc-rr9t7 [114.885359ms] Dec 14 09:04:26.559: INFO: Created: latency-svc-m7sqr Dec 14 09:04:26.562: INFO: Got endpoints: latency-svc-m7sqr [115.564485ms] Dec 14 09:04:26.565: INFO: Created: latency-svc-rvrg5 Dec 14 
09:04:26.568: INFO: Got endpoints: latency-svc-rvrg5 [114.059819ms] Dec 14 09:04:26.573: INFO: Created: latency-svc-khjsd Dec 14 09:04:26.582: INFO: Got endpoints: latency-svc-khjsd [121.743672ms] Dec 14 09:04:26.594: INFO: Created: latency-svc-k89gk Dec 14 09:04:26.597: INFO: Got endpoints: latency-svc-k89gk [129.799818ms] Dec 14 09:04:26.611: INFO: Created: latency-svc-trwth Dec 14 09:04:26.613: INFO: Got endpoints: latency-svc-trwth [139.296074ms] Dec 14 09:04:26.617: INFO: Created: latency-svc-7r8cx Dec 14 09:04:26.628: INFO: Created: latency-svc-cvt25 Dec 14 09:04:26.630: INFO: Got endpoints: latency-svc-7r8cx [144.903401ms] Dec 14 09:04:26.634: INFO: Created: latency-svc-79cck Dec 14 09:04:26.639: INFO: Created: latency-svc-5p48v Dec 14 09:04:26.645: INFO: Created: latency-svc-7smk7 Dec 14 09:04:26.651: INFO: Created: latency-svc-nrqh5 Dec 14 09:04:26.657: INFO: Created: latency-svc-926rk Dec 14 09:04:26.663: INFO: Created: latency-svc-2cfv7 Dec 14 09:04:26.669: INFO: Created: latency-svc-zpckh Dec 14 09:04:26.676: INFO: Created: latency-svc-g4p4p Dec 14 09:04:26.682: INFO: Got endpoints: latency-svc-cvt25 [183.020742ms] Dec 14 09:04:26.683: INFO: Created: latency-svc-2xnpb Dec 14 09:04:26.690: INFO: Created: latency-svc-9gjzl Dec 14 09:04:26.696: INFO: Created: latency-svc-dzwd7 Dec 14 09:04:26.701: INFO: Created: latency-svc-jn9sg Dec 14 09:04:26.706: INFO: Created: latency-svc-w5dcp Dec 14 09:04:26.718: INFO: Created: latency-svc-r6hgw Dec 14 09:04:26.726: INFO: Created: latency-svc-njblk Dec 14 09:04:26.731: INFO: Got endpoints: latency-svc-79cck [228.253664ms] Dec 14 09:04:26.742: INFO: Created: latency-svc-4hctx Dec 14 09:04:26.781: INFO: Got endpoints: latency-svc-5p48v [272.422642ms] Dec 14 09:04:26.792: INFO: Created: latency-svc-4scbg Dec 14 09:04:26.831: INFO: Got endpoints: latency-svc-7smk7 [313.664642ms] Dec 14 09:04:26.848: INFO: Created: latency-svc-dkgcj Dec 14 09:04:26.880: INFO: Got endpoints: latency-svc-nrqh5 [354.476161ms] Dec 14 
09:04:26.891: INFO: Created: latency-svc-9kqtm Dec 14 09:04:26.930: INFO: Got endpoints: latency-svc-926rk [398.79349ms] Dec 14 09:04:26.941: INFO: Created: latency-svc-4bvll Dec 14 09:04:26.981: INFO: Got endpoints: latency-svc-2cfv7 [440.883209ms] Dec 14 09:04:26.993: INFO: Created: latency-svc-vqpbx Dec 14 09:04:27.030: INFO: Got endpoints: latency-svc-zpckh [483.418394ms] Dec 14 09:04:27.040: INFO: Created: latency-svc-5xhgc Dec 14 09:04:27.081: INFO: Got endpoints: latency-svc-g4p4p [526.012797ms] Dec 14 09:04:27.092: INFO: Created: latency-svc-45bpq Dec 14 09:04:27.130: INFO: Got endpoints: latency-svc-2xnpb [568.511475ms] Dec 14 09:04:27.142: INFO: Created: latency-svc-k9dxn Dec 14 09:04:27.181: INFO: Got endpoints: latency-svc-9gjzl [612.918715ms] Dec 14 09:04:27.192: INFO: Created: latency-svc-6bw7d Dec 14 09:04:27.233: INFO: Got endpoints: latency-svc-dzwd7 [650.951141ms] Dec 14 09:04:27.245: INFO: Created: latency-svc-qzvk5 Dec 14 09:04:27.281: INFO: Got endpoints: latency-svc-jn9sg [683.650839ms] Dec 14 09:04:27.295: INFO: Created: latency-svc-w2ld6 Dec 14 09:04:27.339: INFO: Got endpoints: latency-svc-w5dcp [725.449719ms] Dec 14 09:04:27.355: INFO: Created: latency-svc-lqlb4 Dec 14 09:04:27.381: INFO: Got endpoints: latency-svc-r6hgw [750.303795ms] Dec 14 09:04:27.392: INFO: Created: latency-svc-gx86p Dec 14 09:04:27.431: INFO: Got endpoints: latency-svc-njblk [748.908703ms] Dec 14 09:04:27.443: INFO: Created: latency-svc-957vd Dec 14 09:04:27.481: INFO: Got endpoints: latency-svc-4hctx [750.120861ms] Dec 14 09:04:27.493: INFO: Created: latency-svc-vl5qz Dec 14 09:04:27.531: INFO: Got endpoints: latency-svc-4scbg [750.046346ms] Dec 14 09:04:27.545: INFO: Created: latency-svc-mlmdp Dec 14 09:04:27.582: INFO: Got endpoints: latency-svc-dkgcj [750.952561ms] Dec 14 09:04:27.593: INFO: Created: latency-svc-qn8kt Dec 14 09:04:27.630: INFO: Got endpoints: latency-svc-9kqtm [749.596054ms] Dec 14 09:04:27.643: INFO: Created: latency-svc-qg6vv Dec 14 
09:04:27.680: INFO: Got endpoints: latency-svc-4bvll [749.650512ms] Dec 14 09:04:27.691: INFO: Created: latency-svc-thvkf Dec 14 09:04:27.731: INFO: Got endpoints: latency-svc-vqpbx [749.493127ms] Dec 14 09:04:27.742: INFO: Created: latency-svc-v754n Dec 14 09:04:27.780: INFO: Got endpoints: latency-svc-5xhgc [749.849416ms] Dec 14 09:04:27.791: INFO: Created: latency-svc-8xrth Dec 14 09:04:27.830: INFO: Got endpoints: latency-svc-45bpq [748.754163ms] Dec 14 09:04:27.838: INFO: Created: latency-svc-rx4pc Dec 14 09:04:27.881: INFO: Got endpoints: latency-svc-k9dxn [750.333745ms] Dec 14 09:04:27.891: INFO: Created: latency-svc-mwrzj Dec 14 09:04:27.930: INFO: Got endpoints: latency-svc-6bw7d [749.242861ms] Dec 14 09:04:27.940: INFO: Created: latency-svc-xdmxg Dec 14 09:04:27.980: INFO: Got endpoints: latency-svc-qzvk5 [746.928147ms] Dec 14 09:04:27.991: INFO: Created: latency-svc-hvv59 Dec 14 09:04:28.031: INFO: Got endpoints: latency-svc-w2ld6 [750.022666ms] Dec 14 09:04:28.041: INFO: Created: latency-svc-8t8h2 Dec 14 09:04:28.081: INFO: Got endpoints: latency-svc-lqlb4 [742.195495ms] Dec 14 09:04:28.092: INFO: Created: latency-svc-297w6 Dec 14 09:04:28.132: INFO: Got endpoints: latency-svc-gx86p [750.80693ms] Dec 14 09:04:28.141: INFO: Created: latency-svc-fpmlq Dec 14 09:04:28.181: INFO: Got endpoints: latency-svc-957vd [749.75074ms] Dec 14 09:04:28.196: INFO: Created: latency-svc-f4g2l Dec 14 09:04:28.231: INFO: Got endpoints: latency-svc-vl5qz [749.050549ms] Dec 14 09:04:28.242: INFO: Created: latency-svc-cq8wn Dec 14 09:04:28.281: INFO: Got endpoints: latency-svc-mlmdp [749.621842ms] Dec 14 09:04:28.292: INFO: Created: latency-svc-28cts Dec 14 09:04:28.331: INFO: Got endpoints: latency-svc-qn8kt [748.084716ms] Dec 14 09:04:28.342: INFO: Created: latency-svc-7rwqr Dec 14 09:04:28.380: INFO: Got endpoints: latency-svc-qg6vv [750.339062ms] Dec 14 09:04:28.391: INFO: Created: latency-svc-gc4q6 Dec 14 09:04:28.432: INFO: Got endpoints: latency-svc-thvkf 
[751.711865ms] Dec 14 09:04:28.443: INFO: Created: latency-svc-mb9kg Dec 14 09:04:28.480: INFO: Got endpoints: latency-svc-v754n [749.660025ms] Dec 14 09:04:28.491: INFO: Created: latency-svc-zjnkl Dec 14 09:04:28.533: INFO: Got endpoints: latency-svc-8xrth [752.462299ms] Dec 14 09:04:28.545: INFO: Created: latency-svc-6nch9 Dec 14 09:04:28.582: INFO: Got endpoints: latency-svc-rx4pc [752.52816ms] Dec 14 09:04:28.594: INFO: Created: latency-svc-l5mzf Dec 14 09:04:28.631: INFO: Got endpoints: latency-svc-mwrzj [750.251502ms] Dec 14 09:04:28.641: INFO: Created: latency-svc-dztcd Dec 14 09:04:28.681: INFO: Got endpoints: latency-svc-xdmxg [750.819892ms] Dec 14 09:04:28.693: INFO: Created: latency-svc-mnrdd Dec 14 09:04:28.731: INFO: Got endpoints: latency-svc-hvv59 [751.095527ms] Dec 14 09:04:28.742: INFO: Created: latency-svc-qkhsd Dec 14 09:04:28.781: INFO: Got endpoints: latency-svc-8t8h2 [750.313233ms] Dec 14 09:04:28.792: INFO: Created: latency-svc-nbvvq Dec 14 09:04:28.835: INFO: Got endpoints: latency-svc-297w6 [753.882103ms] Dec 14 09:04:28.847: INFO: Created: latency-svc-594zh Dec 14 09:04:28.883: INFO: Got endpoints: latency-svc-fpmlq [751.152768ms] Dec 14 09:04:28.895: INFO: Created: latency-svc-54kwb Dec 14 09:04:28.931: INFO: Got endpoints: latency-svc-f4g2l [750.157965ms] Dec 14 09:04:28.950: INFO: Created: latency-svc-4tk2x Dec 14 09:04:28.981: INFO: Got endpoints: latency-svc-cq8wn [750.454876ms] Dec 14 09:04:28.999: INFO: Created: latency-svc-8smh5 Dec 14 09:04:29.031: INFO: Got endpoints: latency-svc-28cts [749.729649ms] Dec 14 09:04:29.042: INFO: Created: latency-svc-fzhhb Dec 14 09:04:29.082: INFO: Got endpoints: latency-svc-7rwqr [751.667972ms] Dec 14 09:04:29.093: INFO: Created: latency-svc-pzv6p Dec 14 09:04:29.132: INFO: Got endpoints: latency-svc-gc4q6 [751.86845ms] Dec 14 09:04:29.144: INFO: Created: latency-svc-gh7n8 Dec 14 09:04:29.181: INFO: Got endpoints: latency-svc-mb9kg [748.678022ms] Dec 14 09:04:29.194: INFO: Created: 
latency-svc-xvgl5 Dec 14 09:04:29.232: INFO: Got endpoints: latency-svc-zjnkl [751.256937ms] Dec 14 09:04:29.241: INFO: Created: latency-svc-ghl48 Dec 14 09:04:29.281: INFO: Got endpoints: latency-svc-6nch9 [748.276014ms] Dec 14 09:04:29.293: INFO: Created: latency-svc-95v5d Dec 14 09:04:29.332: INFO: Got endpoints: latency-svc-l5mzf [750.006506ms] Dec 14 09:04:29.344: INFO: Created: latency-svc-58sl2 Dec 14 09:04:29.380: INFO: Got endpoints: latency-svc-dztcd [748.575837ms] Dec 14 09:04:29.391: INFO: Created: latency-svc-tws9x Dec 14 09:04:29.433: INFO: Got endpoints: latency-svc-mnrdd [751.418936ms] Dec 14 09:04:29.445: INFO: Created: latency-svc-nbn2r Dec 14 09:04:29.482: INFO: Got endpoints: latency-svc-qkhsd [750.318614ms] Dec 14 09:04:29.494: INFO: Created: latency-svc-cr6gp Dec 14 09:04:29.532: INFO: Got endpoints: latency-svc-nbvvq [750.575906ms] Dec 14 09:04:29.544: INFO: Created: latency-svc-mtvv6 Dec 14 09:04:29.582: INFO: Got endpoints: latency-svc-594zh [746.413815ms] Dec 14 09:04:29.593: INFO: Created: latency-svc-7qlk6 Dec 14 09:04:29.632: INFO: Got endpoints: latency-svc-54kwb [748.291897ms] Dec 14 09:04:29.646: INFO: Created: latency-svc-7tjzv Dec 14 09:04:29.732: INFO: Got endpoints: latency-svc-4tk2x [801.152636ms] Dec 14 09:04:29.743: INFO: Created: latency-svc-qg8f2 Dec 14 09:04:29.782: INFO: Got endpoints: latency-svc-8smh5 [800.950731ms] Dec 14 09:04:29.795: INFO: Created: latency-svc-64t7k Dec 14 09:04:29.834: INFO: Got endpoints: latency-svc-fzhhb [802.419276ms] Dec 14 09:04:29.846: INFO: Created: latency-svc-wsbl6 Dec 14 09:04:29.882: INFO: Got endpoints: latency-svc-pzv6p [799.38902ms] Dec 14 09:04:29.894: INFO: Created: latency-svc-kcphm Dec 14 09:04:29.934: INFO: Got endpoints: latency-svc-gh7n8 [801.317147ms] Dec 14 09:04:29.950: INFO: Created: latency-svc-cv84z Dec 14 09:04:29.981: INFO: Got endpoints: latency-svc-xvgl5 [800.207182ms] Dec 14 09:04:29.992: INFO: Created: latency-svc-5dhpt Dec 14 09:04:30.034: INFO: Got endpoints: 
latency-svc-ghl48 [802.080337ms] Dec 14 09:04:30.044: INFO: Created: latency-svc-r9q2g Dec 14 09:04:30.081: INFO: Got endpoints: latency-svc-95v5d [799.877188ms] Dec 14 09:04:30.092: INFO: Created: latency-svc-v8ps2 Dec 14 09:04:30.130: INFO: Got endpoints: latency-svc-58sl2 [797.55237ms] Dec 14 09:04:30.140: INFO: Created: latency-svc-m7dbx Dec 14 09:04:30.182: INFO: Got endpoints: latency-svc-tws9x [801.646176ms] Dec 14 09:04:30.193: INFO: Created: latency-svc-9spf6 Dec 14 09:04:30.232: INFO: Got endpoints: latency-svc-nbn2r [798.889039ms] Dec 14 09:04:30.243: INFO: Created: latency-svc-h2lkj Dec 14 09:04:30.281: INFO: Got endpoints: latency-svc-cr6gp [798.913782ms] Dec 14 09:04:30.294: INFO: Created: latency-svc-45f7r Dec 14 09:04:30.330: INFO: Got endpoints: latency-svc-mtvv6 [798.354592ms] Dec 14 09:04:30.344: INFO: Created: latency-svc-5txk4 Dec 14 09:04:30.383: INFO: Got endpoints: latency-svc-7qlk6 [800.882718ms] Dec 14 09:04:30.411: INFO: Created: latency-svc-d2bcq Dec 14 09:04:30.433: INFO: Got endpoints: latency-svc-7tjzv [801.088713ms] Dec 14 09:04:30.443: INFO: Created: latency-svc-wmwtx Dec 14 09:04:30.481: INFO: Got endpoints: latency-svc-qg8f2 [748.736141ms] Dec 14 09:04:30.490: INFO: Created: latency-svc-z76xh Dec 14 09:04:30.531: INFO: Got endpoints: latency-svc-64t7k [749.049102ms] Dec 14 09:04:30.544: INFO: Created: latency-svc-5j2fr Dec 14 09:04:30.581: INFO: Got endpoints: latency-svc-wsbl6 [747.18812ms] Dec 14 09:04:30.592: INFO: Created: latency-svc-zxnl8 Dec 14 09:04:30.631: INFO: Got endpoints: latency-svc-kcphm [749.126651ms] Dec 14 09:04:30.642: INFO: Created: latency-svc-zcnxh Dec 14 09:04:30.681: INFO: Got endpoints: latency-svc-cv84z [746.637269ms] Dec 14 09:04:30.692: INFO: Created: latency-svc-9ds5p Dec 14 09:04:30.731: INFO: Got endpoints: latency-svc-5dhpt [749.479383ms] Dec 14 09:04:30.742: INFO: Created: latency-svc-zqmsn Dec 14 09:04:30.780: INFO: Got endpoints: latency-svc-r9q2g [746.341505ms] Dec 14 09:04:30.792: INFO: 
Created: latency-svc-lsqjp Dec 14 09:04:30.836: INFO: Got endpoints: latency-svc-v8ps2 [754.247314ms] Dec 14 09:04:30.847: INFO: Created: latency-svc-7j8ml Dec 14 09:04:30.882: INFO: Got endpoints: latency-svc-m7dbx [751.79408ms] Dec 14 09:04:30.896: INFO: Created: latency-svc-bz7g4 Dec 14 09:04:30.932: INFO: Got endpoints: latency-svc-9spf6 [750.237075ms] Dec 14 09:04:30.949: INFO: Created: latency-svc-tbqq5 Dec 14 09:04:30.982: INFO: Got endpoints: latency-svc-h2lkj [749.792957ms] Dec 14 09:04:30.994: INFO: Created: latency-svc-xs8hd Dec 14 09:04:31.032: INFO: Got endpoints: latency-svc-45f7r [751.53075ms] Dec 14 09:04:31.044: INFO: Created: latency-svc-psh5l Dec 14 09:04:31.082: INFO: Got endpoints: latency-svc-5txk4 [751.40751ms] Dec 14 09:04:31.096: INFO: Created: latency-svc-5xqp7 Dec 14 09:04:31.130: INFO: Got endpoints: latency-svc-d2bcq [747.383226ms] Dec 14 09:04:31.140: INFO: Created: latency-svc-lkg9t Dec 14 09:04:31.182: INFO: Got endpoints: latency-svc-wmwtx [748.731182ms] Dec 14 09:04:31.191: INFO: Created: latency-svc-n9kqt Dec 14 09:04:31.231: INFO: Got endpoints: latency-svc-z76xh [749.673362ms] Dec 14 09:04:31.242: INFO: Created: latency-svc-qs6tf Dec 14 09:04:31.281: INFO: Got endpoints: latency-svc-5j2fr [749.65548ms] Dec 14 09:04:31.292: INFO: Created: latency-svc-h28s8 Dec 14 09:04:31.331: INFO: Got endpoints: latency-svc-zxnl8 [750.088162ms] Dec 14 09:04:31.341: INFO: Created: latency-svc-q7cp5 Dec 14 09:04:31.432: INFO: Got endpoints: latency-svc-zcnxh [800.342613ms] Dec 14 09:04:31.443: INFO: Created: latency-svc-8f96b Dec 14 09:04:31.482: INFO: Got endpoints: latency-svc-9ds5p [801.308838ms] Dec 14 09:04:31.493: INFO: Created: latency-svc-4sgvl Dec 14 09:04:31.531: INFO: Got endpoints: latency-svc-zqmsn [800.449276ms] Dec 14 09:04:31.540: INFO: Created: latency-svc-hpczk Dec 14 09:04:31.582: INFO: Got endpoints: latency-svc-lsqjp [801.103247ms] Dec 14 09:04:31.594: INFO: Created: latency-svc-gthch Dec 14 09:04:31.632: INFO: Got endpoints: 
latency-svc-7j8ml [796.125534ms] Dec 14 09:04:31.643: INFO: Created: latency-svc-5jtxc Dec 14 09:04:31.682: INFO: Got endpoints: latency-svc-bz7g4 [800.430987ms] Dec 14 09:04:31.694: INFO: Created: latency-svc-ljwns Dec 14 09:04:31.732: INFO: Got endpoints: latency-svc-tbqq5 [799.726491ms] Dec 14 09:04:31.744: INFO: Created: latency-svc-b4d2h Dec 14 09:04:31.782: INFO: Got endpoints: latency-svc-xs8hd [799.743474ms] Dec 14 09:04:31.792: INFO: Created: latency-svc-6gldz Dec 14 09:04:31.830: INFO: Got endpoints: latency-svc-psh5l [797.881886ms] Dec 14 09:04:31.840: INFO: Created: latency-svc-svt5q Dec 14 09:04:31.882: INFO: Got endpoints: latency-svc-5xqp7 [799.930212ms] Dec 14 09:04:31.894: INFO: Created: latency-svc-gd685 Dec 14 09:04:31.932: INFO: Got endpoints: latency-svc-lkg9t [801.851022ms] Dec 14 09:04:31.944: INFO: Created: latency-svc-lxc59 Dec 14 09:04:31.983: INFO: Got endpoints: latency-svc-n9kqt [800.990146ms] Dec 14 09:04:31.992: INFO: Created: latency-svc-6xlrc Dec 14 09:04:32.032: INFO: Got endpoints: latency-svc-qs6tf [800.87455ms] Dec 14 09:04:32.044: INFO: Created: latency-svc-zb47z Dec 14 09:04:32.082: INFO: Got endpoints: latency-svc-h28s8 [800.547631ms] Dec 14 09:04:32.093: INFO: Created: latency-svc-6ws84 Dec 14 09:04:32.132: INFO: Got endpoints: latency-svc-q7cp5 [800.726973ms] Dec 14 09:04:32.143: INFO: Created: latency-svc-zw9ll Dec 14 09:04:32.182: INFO: Got endpoints: latency-svc-8f96b [750.593418ms] Dec 14 09:04:32.197: INFO: Created: latency-svc-xjcc5 Dec 14 09:04:32.236: INFO: Got endpoints: latency-svc-4sgvl [753.650491ms] Dec 14 09:04:32.247: INFO: Created: latency-svc-jgqpp Dec 14 09:04:32.281: INFO: Got endpoints: latency-svc-hpczk [750.047716ms] Dec 14 09:04:32.295: INFO: Created: latency-svc-cgr7k Dec 14 09:04:32.331: INFO: Got endpoints: latency-svc-gthch [749.29814ms] Dec 14 09:04:32.343: INFO: Created: latency-svc-6t5xt Dec 14 09:04:32.382: INFO: Got endpoints: latency-svc-5jtxc [750.359992ms] Dec 14 09:04:32.392: INFO: 
Created: latency-svc-h9hpd Dec 14 09:04:32.432: INFO: Got endpoints: latency-svc-ljwns [749.639988ms] Dec 14 09:04:32.444: INFO: Created: latency-svc-gcjhl Dec 14 09:04:32.482: INFO: Got endpoints: latency-svc-b4d2h [750.569781ms] Dec 14 09:04:32.495: INFO: Created: latency-svc-q6q5k Dec 14 09:04:32.532: INFO: Got endpoints: latency-svc-6gldz [749.930401ms] Dec 14 09:04:32.543: INFO: Created: latency-svc-f7t62 Dec 14 09:04:32.583: INFO: Got endpoints: latency-svc-svt5q [752.212355ms] Dec 14 09:04:32.594: INFO: Created: latency-svc-swqv7 Dec 14 09:04:32.632: INFO: Got endpoints: latency-svc-gd685 [749.361787ms] Dec 14 09:04:32.644: INFO: Created: latency-svc-qcjll Dec 14 09:04:32.682: INFO: Got endpoints: latency-svc-lxc59 [749.277138ms] Dec 14 09:04:32.695: INFO: Created: latency-svc-tfczd Dec 14 09:04:32.731: INFO: Got endpoints: latency-svc-6xlrc [748.002851ms] Dec 14 09:04:32.744: INFO: Created: latency-svc-ccqk5 Dec 14 09:04:32.782: INFO: Got endpoints: latency-svc-zb47z [749.774464ms] Dec 14 09:04:32.795: INFO: Created: latency-svc-p6rdb Dec 14 09:04:32.832: INFO: Got endpoints: latency-svc-6ws84 [749.775137ms] Dec 14 09:04:32.844: INFO: Created: latency-svc-47s5f Dec 14 09:04:32.881: INFO: Got endpoints: latency-svc-zw9ll [749.361822ms] Dec 14 09:04:32.895: INFO: Created: latency-svc-8ktqj Dec 14 09:04:32.932: INFO: Got endpoints: latency-svc-xjcc5 [749.13234ms] Dec 14 09:04:32.944: INFO: Created: latency-svc-hdf4x Dec 14 09:04:32.982: INFO: Got endpoints: latency-svc-jgqpp [745.649459ms] Dec 14 09:04:32.994: INFO: Created: latency-svc-wwzj5 Dec 14 09:04:33.033: INFO: Got endpoints: latency-svc-cgr7k [751.564726ms] Dec 14 09:04:33.047: INFO: Created: latency-svc-7wqxc Dec 14 09:04:33.081: INFO: Got endpoints: latency-svc-6t5xt [749.431491ms] Dec 14 09:04:33.095: INFO: Created: latency-svc-c7gct Dec 14 09:04:33.131: INFO: Got endpoints: latency-svc-h9hpd [748.445352ms] Dec 14 09:04:33.146: INFO: Created: latency-svc-5287q Dec 14 09:04:33.181: INFO: Got 
endpoints: latency-svc-gcjhl [748.808392ms] Dec 14 09:04:33.196: INFO: Created: latency-svc-zlxv4 Dec 14 09:04:33.231: INFO: Got endpoints: latency-svc-q6q5k [748.211794ms] Dec 14 09:04:33.241: INFO: Created: latency-svc-rfmmv Dec 14 09:04:33.282: INFO: Got endpoints: latency-svc-f7t62 [750.418359ms] Dec 14 09:04:33.293: INFO: Created: latency-svc-dpsnr Dec 14 09:04:33.331: INFO: Got endpoints: latency-svc-swqv7 [748.378308ms] Dec 14 09:04:33.343: INFO: Created: latency-svc-q9dr5 Dec 14 09:04:33.381: INFO: Got endpoints: latency-svc-qcjll [749.381803ms] Dec 14 09:04:33.392: INFO: Created: latency-svc-g9596 Dec 14 09:04:33.432: INFO: Got endpoints: latency-svc-tfczd [750.43942ms] Dec 14 09:04:33.443: INFO: Created: latency-svc-jpxhp Dec 14 09:04:33.481: INFO: Got endpoints: latency-svc-ccqk5 [750.239768ms] Dec 14 09:04:33.494: INFO: Created: latency-svc-kpknb Dec 14 09:04:33.531: INFO: Got endpoints: latency-svc-p6rdb [749.335187ms] Dec 14 09:04:33.545: INFO: Created: latency-svc-2ll6j Dec 14 09:04:33.581: INFO: Got endpoints: latency-svc-47s5f [748.794633ms] Dec 14 09:04:33.593: INFO: Created: latency-svc-8ggzd Dec 14 09:04:33.633: INFO: Got endpoints: latency-svc-8ktqj [751.325653ms] Dec 14 09:04:33.647: INFO: Created: latency-svc-lf9jn Dec 14 09:04:33.682: INFO: Got endpoints: latency-svc-hdf4x [750.203926ms] Dec 14 09:04:33.695: INFO: Created: latency-svc-rk5wt Dec 14 09:04:33.733: INFO: Got endpoints: latency-svc-wwzj5 [751.590795ms] Dec 14 09:04:33.745: INFO: Created: latency-svc-phkm9 Dec 14 09:04:33.784: INFO: Got endpoints: latency-svc-7wqxc [751.204668ms] Dec 14 09:04:33.796: INFO: Created: latency-svc-8cz96 Dec 14 09:04:33.835: INFO: Got endpoints: latency-svc-c7gct [753.67065ms] Dec 14 09:04:33.849: INFO: Created: latency-svc-jzj58 Dec 14 09:04:33.882: INFO: Got endpoints: latency-svc-5287q [751.519506ms] Dec 14 09:04:33.894: INFO: Created: latency-svc-6r56t Dec 14 09:04:33.930: INFO: Got endpoints: latency-svc-zlxv4 [749.338665ms] Dec 14 09:04:33.942: 
INFO: Created: latency-svc-6cwbp Dec 14 09:04:33.981: INFO: Got endpoints: latency-svc-rfmmv [750.602136ms] Dec 14 09:04:33.991: INFO: Created: latency-svc-h7sfw Dec 14 09:04:34.031: INFO: Got endpoints: latency-svc-dpsnr [748.370775ms] Dec 14 09:04:34.041: INFO: Created: latency-svc-ptrnk Dec 14 09:04:34.080: INFO: Got endpoints: latency-svc-q9dr5 [748.344542ms] Dec 14 09:04:34.089: INFO: Created: latency-svc-xntsz Dec 14 09:04:34.130: INFO: Got endpoints: latency-svc-g9596 [749.193787ms] Dec 14 09:04:34.140: INFO: Created: latency-svc-cvc4p Dec 14 09:04:34.182: INFO: Got endpoints: latency-svc-jpxhp [749.257541ms] Dec 14 09:04:34.192: INFO: Created: latency-svc-drfng Dec 14 09:04:34.231: INFO: Got endpoints: latency-svc-kpknb [750.030504ms] Dec 14 09:04:34.242: INFO: Created: latency-svc-6mvzm Dec 14 09:04:34.333: INFO: Got endpoints: latency-svc-2ll6j [801.385494ms] Dec 14 09:04:34.382: INFO: Got endpoints: latency-svc-8ggzd [801.266418ms] Dec 14 09:04:34.432: INFO: Got endpoints: latency-svc-lf9jn [799.129666ms] Dec 14 09:04:34.481: INFO: Got endpoints: latency-svc-rk5wt [799.23529ms] Dec 14 09:04:34.531: INFO: Got endpoints: latency-svc-phkm9 [797.729179ms] Dec 14 09:04:34.582: INFO: Got endpoints: latency-svc-8cz96 [796.957336ms] Dec 14 09:04:34.631: INFO: Got endpoints: latency-svc-jzj58 [796.298176ms] Dec 14 09:04:34.682: INFO: Got endpoints: latency-svc-6r56t [799.700147ms] Dec 14 09:04:34.731: INFO: Got endpoints: latency-svc-6cwbp [800.822438ms] Dec 14 09:04:34.782: INFO: Got endpoints: latency-svc-h7sfw [800.244372ms] Dec 14 09:04:34.834: INFO: Got endpoints: latency-svc-ptrnk [803.163526ms] Dec 14 09:04:34.882: INFO: Got endpoints: latency-svc-xntsz [802.159843ms] Dec 14 09:04:34.933: INFO: Got endpoints: latency-svc-cvc4p [802.460237ms] Dec 14 09:04:34.982: INFO: Got endpoints: latency-svc-drfng [800.830269ms] Dec 14 09:04:35.032: INFO: Got endpoints: latency-svc-6mvzm [800.119756ms] Dec 14 09:04:35.032: INFO: Latencies: [14.545973ms 20.02725ms 
30.438629ms 35.207347ms 40.412271ms 44.924784ms 53.692735ms 58.645557ms 66.050376ms 72.000565ms 79.2343ms 86.126753ms 94.584109ms 100.26781ms 102.536133ms 103.106253ms 103.629834ms 104.201752ms 108.854581ms 109.861422ms 113.101097ms 113.899001ms 114.059819ms 114.885359ms 115.272641ms 115.564485ms 115.907671ms 116.029045ms 117.525293ms 117.621171ms 117.844629ms 121.743672ms 129.799818ms 139.296074ms 144.903401ms 183.020742ms 228.253664ms 272.422642ms 313.664642ms 354.476161ms 398.79349ms 440.883209ms 483.418394ms 526.012797ms 568.511475ms 612.918715ms 650.951141ms 683.650839ms 725.449719ms 742.195495ms 745.649459ms 746.341505ms 746.413815ms 746.637269ms 746.928147ms 747.18812ms 747.383226ms 748.002851ms 748.084716ms 748.211794ms 748.276014ms 748.291897ms 748.344542ms 748.370775ms 748.378308ms 748.445352ms 748.575837ms 748.678022ms 748.731182ms 748.736141ms 748.754163ms 748.794633ms 748.808392ms 748.908703ms 749.049102ms 749.050549ms 749.126651ms 749.13234ms 749.193787ms 749.242861ms 749.257541ms 749.277138ms 749.29814ms 749.335187ms 749.338665ms 749.361787ms 749.361822ms 749.381803ms 749.431491ms 749.479383ms 749.493127ms 749.596054ms 749.621842ms 749.639988ms 749.650512ms 749.65548ms 749.660025ms 749.673362ms 749.729649ms 749.75074ms 749.774464ms 749.775137ms 749.792957ms 749.849416ms 749.930401ms 750.006506ms 750.022666ms 750.030504ms 750.046346ms 750.047716ms 750.088162ms 750.120861ms 750.157965ms 750.203926ms 750.237075ms 750.239768ms 750.251502ms 750.303795ms 750.313233ms 750.318614ms 750.333745ms 750.339062ms 750.359992ms 750.418359ms 750.43942ms 750.454876ms 750.569781ms 750.575906ms 750.593418ms 750.602136ms 750.80693ms 750.819892ms 750.952561ms 751.095527ms 751.152768ms 751.204668ms 751.256937ms 751.325653ms 751.40751ms 751.418936ms 751.519506ms 751.53075ms 751.564726ms 751.590795ms 751.667972ms 751.711865ms 751.79408ms 751.86845ms 752.212355ms 752.462299ms 752.52816ms 753.650491ms 753.67065ms 753.882103ms 754.247314ms 796.125534ms 796.298176ms 796.957336ms 
797.55237ms 797.729179ms 797.881886ms 798.354592ms 798.889039ms 798.913782ms 799.129666ms 799.23529ms 799.38902ms 799.700147ms 799.726491ms 799.743474ms 799.877188ms 799.930212ms 800.119756ms 800.207182ms 800.244372ms 800.342613ms 800.430987ms 800.449276ms 800.547631ms 800.726973ms 800.822438ms 800.830269ms 800.87455ms 800.882718ms 800.950731ms 800.990146ms 801.088713ms 801.103247ms 801.152636ms 801.266418ms 801.308838ms 801.317147ms 801.385494ms 801.646176ms 801.851022ms 802.080337ms 802.159843ms 802.419276ms 802.460237ms 803.163526ms] Dec 14 09:04:35.032: INFO: 50 %ile: 749.774464ms Dec 14 09:04:35.032: INFO: 90 %ile: 800.822438ms Dec 14 09:04:35.032: INFO: 99 %ile: 802.460237ms Dec 14 09:04:35.033: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:35.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-311" for this suite. 
• [SLOW TEST:11.938 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":10,"skipped":155,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":260,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:30.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:04:30.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7" in namespace "downward-api-1853" to be "Succeeded or Failed" Dec 14 09:04:30.156: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003716ms Dec 14 09:04:32.160: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007010023s Dec 14 09:04:34.164: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010538843s Dec 14 09:04:36.169: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015230708s Dec 14 09:04:38.174: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020814198s STEP: Saw pod success Dec 14 09:04:38.174: INFO: Pod "downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7" satisfied condition "Succeeded or Failed" Dec 14 09:04:38.178: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7 container client-container: STEP: delete the pod Dec 14 09:04:38.194: INFO: Waiting for pod downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7 to disappear Dec 14 09:04:38.198: INFO: Pod downwardapi-volume-2e5e415a-b106-42f0-a237-11f4f6a66dd7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:38.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1853" for this suite. 
• [SLOW TEST:8.094 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:28.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Dec 14 09:04:28.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13945663 0 2021-12-14 09:04:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:04:28.257: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13945664 0 2021-12-14 09:04:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:04:28.257: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13945665 0 2021-12-14 09:04:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Dec 14 09:04:38.298: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13946576 0 2021-12-14 09:04:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:04:38.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13946577 0 2021-12-14 09:04:28 +0000 
UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:04:38.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7804 722ec4cf-8441-4d1d-921e-0fa90c9b26b4 13946578 0 2021-12-14 09:04:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:38.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7804" for this suite. 
• [SLOW TEST:10.109 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":9,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:38.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Dec 14 09:04:38.461: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:38.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "events-9844" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":10,"skipped":231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:34.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Dec 14 09:04:34.131: INFO: The status of Pod pod-update-b9355c29-ffcc-4d80-a6a3-998c2b368a44 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:36.136: INFO: The status of Pod pod-update-b9355c29-ffcc-4d80-a6a3-998c2b368a44 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:38.137: INFO: The status of Pod pod-update-b9355c29-ffcc-4d80-a6a3-998c2b368a44 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:40.137: INFO: The status of Pod pod-update-b9355c29-ffcc-4d80-a6a3-998c2b368a44 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 14 09:04:40.652: INFO: Successfully updated pod "pod-update-b9355c29-ffcc-4d80-a6a3-998c2b368a44" STEP: verifying the updated pod is in kubernetes Dec 14 09:04:40.658: INFO: Pod update OK [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:40.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8789" for this suite. • [SLOW TEST:6.573 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:21.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-rh5r STEP: Creating a pod to test atomic-volume-subpath Dec 14 09:04:21.816: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rh5r" in namespace "subpath-7549" to be "Succeeded or Failed" Dec 14 09:04:21.820: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.679866ms Dec 14 09:04:23.825: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009088618s Dec 14 09:04:25.835: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 4.018771109s Dec 14 09:04:27.839: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 6.022726058s Dec 14 09:04:29.844: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 8.027581437s Dec 14 09:04:31.849: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 10.03239106s Dec 14 09:04:33.852: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 12.035941784s Dec 14 09:04:35.856: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 14.040201954s Dec 14 09:04:37.861: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 16.044403921s Dec 14 09:04:39.868: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 18.051962356s Dec 14 09:04:41.873: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Running", Reason="", readiness=true. Elapsed: 20.056729774s Dec 14 09:04:43.878: INFO: Pod "pod-subpath-test-secret-rh5r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.061506196s STEP: Saw pod success Dec 14 09:04:43.878: INFO: Pod "pod-subpath-test-secret-rh5r" satisfied condition "Succeeded or Failed" Dec 14 09:04:43.881: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-subpath-test-secret-rh5r container test-container-subpath-secret-rh5r: STEP: delete the pod Dec 14 09:04:43.898: INFO: Waiting for pod pod-subpath-test-secret-rh5r to disappear Dec 14 09:04:43.901: INFO: Pod pod-subpath-test-secret-rh5r no longer exists STEP: Deleting pod pod-subpath-test-secret-rh5r Dec 14 09:04:43.901: INFO: Deleting pod "pod-subpath-test-secret-rh5r" in namespace "subpath-7549" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:43.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7549" for this suite. • [SLOW TEST:22.143 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:38.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create 
a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:44.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-6686" for this suite. • [SLOW TEST:6.088 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":11,"skipped":264,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:38.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:38.477: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:44.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3014" for this suite. • [SLOW TEST:6.234 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:40.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-94ffcc94-9cf6-49c2-82f5-3f0a05c9bb48 STEP: Creating secret with name s-test-opt-upd-92e2bf5c-f3e5-4e86-afae-159a0d0abe7b STEP: Creating the pod Dec 14 09:04:40.764: INFO: The status of Pod pod-secrets-ddbbf663-3432-415e-9726-2523bee43ac0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:42.768: INFO: The status of Pod pod-secrets-ddbbf663-3432-415e-9726-2523bee43ac0 is Running (Ready = true) STEP: Deleting secret 
s-test-opt-del-94ffcc94-9cf6-49c2-82f5-3f0a05c9bb48 STEP: Updating secret s-test-opt-upd-92e2bf5c-f3e5-4e86-afae-159a0d0abe7b STEP: Creating secret with name s-test-opt-create-0d085509-5d58-4e31-9d8b-539bcfd91bea STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:44.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-454" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:30.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should block an eviction until the PDB is updated to allow it [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pdb that targets all three pods in a test replica set STEP: Waiting for the pdb to be processed STEP: First trying to evict a pod which shouldn't be evictable STEP: Waiting for all pods to be running Dec 14 09:04:32.790: INFO: pods: 0 < 3 Dec 14 09:04:34.796: INFO: running pods: 0 < 3 Dec 14 09:04:36.796: INFO: running pods: 0 < 3 Dec 14 09:04:38.797: INFO: running pods: 0 < 3 STEP: locating a running pod STEP: Updating the pdb to allow a pod to be 
evicted STEP: Waiting for the pdb to be processed STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running STEP: Waiting for the pdb to observed all healthy pods STEP: Patching the pdb to disallow a pod to be evicted STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running STEP: locating a running pod STEP: Deleting the pdb to allow a pod to be evicted STEP: Waiting for the pdb to be deleted STEP: Trying to evict the same pod we tried earlier which should now be evictable STEP: Waiting for all pods to be running [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:44.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3143" for this suite. • [SLOW TEST:14.156 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should block an eviction until the PDB is updated to allow it [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":9,"skipped":164,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:35.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:04:35.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:04:37.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:04:39.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069475, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:04:42.592: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Dec 14 09:04:43.592: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Dec 14 09:04:44.591: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Dec 14 09:04:45.591: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:45.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7363" for this suite. STEP: Destroying namespace "webhook-7363-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.623 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":11,"skipped":164,"failed":0} SS ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":266,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:43.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 14 09:04:43.948: INFO: Waiting up to 5m0s for pod "pod-fdf8f704-3036-45a5-87c0-766f2defffd8" in namespace "emptydir-4403" to be "Succeeded or Failed" Dec 14 09:04:43.950: INFO: Pod "pod-fdf8f704-3036-45a5-87c0-766f2defffd8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.236724ms Dec 14 09:04:45.954: INFO: Pod "pod-fdf8f704-3036-45a5-87c0-766f2defffd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005964726s STEP: Saw pod success Dec 14 09:04:45.954: INFO: Pod "pod-fdf8f704-3036-45a5-87c0-766f2defffd8" satisfied condition "Succeeded or Failed" Dec 14 09:04:45.957: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-fdf8f704-3036-45a5-87c0-766f2defffd8 container test-container: STEP: delete the pod Dec 14 09:04:45.972: INFO: Waiting for pod pod-fdf8f704-3036-45a5-87c0-766f2defffd8 to disappear Dec 14 09:04:45.975: INFO: Pod pod-fdf8f704-3036-45a5-87c0-766f2defffd8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:45.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4403" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:44.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create a ReplicaSet STEP: Verify that the required pods have come up Dec 14 09:04:44.714: INFO: Pod name sample-pod: Found 0 pods out of 3 Dec 14 09:04:49.718: INFO: Pod name sample-pod: Found 
3 pods out of 3 STEP: ensuring each pod is running Dec 14 09:04:51.730: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} STEP: Listing all ReplicaSets STEP: DeleteCollection of the ReplicaSets STEP: After DeleteCollection verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:51.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-925" for this suite. • [SLOW TEST:7.079 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":12,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:51.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events Dec 14 09:04:51.897: INFO: created test-event-1 Dec 14 09:04:51.900: INFO: created test-event-2 Dec 14 09:04:51.904: INFO: created test-event-3 STEP: get a list of Events with a label in the current 
namespace STEP: delete collection of events Dec 14 09:04:51.908: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Dec 14 09:04:51.922: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:51.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1945" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":13,"skipped":308,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":15,"skipped":365,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:44.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-235.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-235.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:04:52.776: INFO: DNS probes using dns-235/dns-test-9c38ed0e-2050-47d8-b86d-812bba0a6a84 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:52.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-235" for this suite. 
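The probe commands in the STEP output above are logged with Go-template escaping (`$$` stands for a literal `$`, `$${podARec}` for `${podARec}`). De-escaped, the check each wheezy/jessie pod runs per name is roughly the sketch below; the helper names are illustrative, not part of the e2e framework, and the `dns-235` suffix is the namespace from this particular run:

```shell
#!/bin/sh
# Derive the pod's A-record name from its IP, as the probe does via
# `hostname -i`: e.g. 10.244.1.5 -> 10-244-1-5.dns-235.pod.cluster.local
pod_a_record() {
  echo "$1" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-235.pod.cluster.local"}'
}

# One UDP (+notcp) and one TCP (+tcp) lookup per name; a non-empty answer
# section writes an OK marker file that the test harness reads back later
# from /results. (Requires `dig` and cluster DNS, so not invoked here.)
probe() {
  name="$1"; prefix="$2"
  check="$(dig +notcp +noall +answer +search "$name" A)" &&
    test -n "$check" && echo OK > "/results/${prefix}_udp@${name}"
  check="$(dig +tcp +noall +answer +search "$name" A)" &&
    test -n "$check" && echo OK > "/results/${prefix}_tcp@${name}"
}

pod_a_record "10.244.1.5"
```

The surrounding `for i in `seq 1 600`; do …; sleep 1; done` loop simply retries this for up to ten minutes until every marker file exists.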
• [SLOW TEST:8.103 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":16,"skipped":365,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:33.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-9128 [It] should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:33.518: INFO: Found 0 stateful pods, waiting for 1 Dec 14 09:04:43.522: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: patching the StatefulSet Dec 14 09:04:43.537: INFO: Found 1 stateful pods, waiting for 2 Dec 14 09:04:53.543: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:04:53.543: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Listing all StatefulSets STEP: Delete all of the StatefulSets STEP: Verify that StatefulSets have been deleted [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:04:53.561: INFO: Deleting all statefulset in ns statefulset-9128 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:53.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9128" for this suite. • [SLOW TEST:20.106 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":11,"skipped":159,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:45.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Dec 14 09:04:51.754: INFO: 10 pods remaining Dec 14 09:04:51.754: INFO: 10 pods has nil DeletionTimestamp Dec 14 09:04:51.754: INFO: Dec 14 09:04:52.755: INFO: 10 pods remaining Dec 14 09:04:52.755: INFO: 10 pods has nil DeletionTimestamp Dec 14 09:04:52.755: INFO: Dec 14 09:04:53.753: INFO: 0 pods remaining Dec 14 09:04:53.753: INFO: 0 pods has nil DeletionTimestamp Dec 14 09:04:53.753: INFO: STEP: Gathering metrics Dec 14 09:04:54.774: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true) Dec 14 09:04:55.656: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3700" for this suite. 
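The behavior exercised here — the rc lingering with its pods counted down to zero before it disappears — matches foreground cascading deletion. A minimal sketch of the `DeleteOptions` body that requests it (an assumption for illustration: the e2e test itself drives this through the Go client, not a raw REST body):

```shell
# Build a DeleteOptions body for a cascading delete. With
# propagationPolicy=Foreground, the owner (the rc) is kept, carrying a
# foregroundDeletion finalizer, until all dependent pods are deleted.
delete_options() {
  printf '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"%s"}' "$1"
}

# Roughly equivalent kubectl invocation:
#   kubectl delete rc <name> --cascade=foreground
delete_options Foreground
```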
• [SLOW TEST:9.967 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":12,"skipped":166,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:25.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-ea602399-b331-42b6-a435-ee7d8a21b006 STEP: Creating the pod Dec 14 09:03:25.944: INFO: The status of Pod pod-configmaps-a75a70bb-5211-42d7-8644-742bf9411864 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:27.947: INFO: The status of Pod pod-configmaps-a75a70bb-5211-42d7-8644-742bf9411864 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:03:29.949: INFO: The status of Pod pod-configmaps-a75a70bb-5211-42d7-8644-742bf9411864 is Running (Ready = true) STEP: Updating configmap configmap-test-upd-ea602399-b331-42b6-a435-ee7d8a21b006 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:58.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9302" for this suite. • [SLOW TEST:92.527 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:44.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:52.945: INFO: Deleting pod "var-expansion-2f3c2ebd-e562-4b01-8885-632e252fe0ba" in namespace "var-expansion-5691" Dec 14 09:04:52.950: INFO: Wait up to 5m0s for pod "var-expansion-2f3c2ebd-e562-4b01-8885-632e252fe0ba" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:04:58.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5691" for this suite. 
• [SLOW TEST:14.060 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":10,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:53.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-1b3b3a4b-ddea-4a03-941b-1bd66f633947 STEP: Creating a pod to test consume secrets Dec 14 09:04:53.643: INFO: Waiting up to 5m0s for pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f" in namespace "secrets-4717" to be "Succeeded or Failed" Dec 14 09:04:53.646: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568329ms Dec 14 09:04:55.650: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007157084s Dec 14 09:04:57.656: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012338624s Dec 14 09:04:59.661: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018115411s Dec 14 09:05:01.666: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022443878s STEP: Saw pod success Dec 14 09:05:01.666: INFO: Pod "pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f" satisfied condition "Succeeded or Failed" Dec 14 09:05:01.670: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f container secret-volume-test: STEP: delete the pod Dec 14 09:05:01.686: INFO: Waiting for pod pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f to disappear Dec 14 09:05:01.691: INFO: Pod pod-secrets-813ee72b-f3ef-49e5-9064-5672d5009d0f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:01.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4717" for this suite. STEP: Destroying namespace "secret-namespace-1746" for this suite. 
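The repeated `Phase="Pending" … Elapsed: …` lines above come from the framework polling the pod until it reaches a terminal phase ("Succeeded or Failed"). The same wait, sketched with kubectl instead of the framework's Go helper (helper name hypothetical):

```shell
# A pod is finished only in a terminal phase; Pending/Running/Unknown
# mean keep polling.
is_terminal_phase() {
  case "$1" in
    Succeeded|Failed) return 0 ;;
    *) return 1 ;;
  esac
}

# Poll-loop sketch (needs cluster access, so left as a comment):
#   while ! is_terminal_phase "$(kubectl get pod "$pod" -n "$ns" \
#       -o jsonpath='{.status.phase}')"; do sleep 2; done
is_terminal_phase Succeeded && echo done
```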
• [SLOW TEST:8.123 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":160,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:58.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Dec 14 09:04:58.456: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:03.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1127" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:52.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating all guestbook components Dec 14 09:04:52.859: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Dec 14 09:04:52.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:53.160: INFO: stderr: "" Dec 14 09:04:53.160: INFO: stdout: "service/agnhost-replica created\n" Dec 14 09:04:53.160: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Dec 14 09:04:53.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:53.381: INFO: stderr: "" Dec 14 09:04:53.381: INFO: stdout: "service/agnhost-primary created\n" Dec 14 09:04:53.381: INFO: apiVersion: v1 kind: Service metadata: name: 
frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Dec 14 09:04:53.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:53.596: INFO: stderr: "" Dec 14 09:04:53.596: INFO: stdout: "service/frontend created\n" Dec 14 09:04:53.596: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Dec 14 09:04:53.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:53.828: INFO: stderr: "" Dec 14 09:04:53.828: INFO: stdout: "deployment.apps/frontend created\n" Dec 14 09:04:53.829: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Dec 14 09:04:53.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:54.048: INFO: stderr: "" Dec 14 09:04:54.049: INFO: stdout: "deployment.apps/agnhost-primary created\n" Dec 14 09:04:54.049: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: 
app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.32 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Dec 14 09:04:54.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 create -f -' Dec 14 09:04:54.257: INFO: stderr: "" Dec 14 09:04:54.257: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Dec 14 09:04:54.257: INFO: Waiting for all frontend pods to be Running. Dec 14 09:05:04.311: INFO: Waiting for frontend to serve content. Dec 14 09:05:04.322: INFO: Trying to add a new entry to the guestbook. Dec 14 09:05:04.338: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Dec 14 09:05:04.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:04.463: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:04.463: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Dec 14 09:05:04.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:04.583: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:04.583: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Dec 14 09:05:04.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:04.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:04.699: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 14 09:05:04.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:04.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:04.808: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 14 09:05:04.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:04.921: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:04.921: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Dec 14 09:05:04.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1018 delete --grace-period=0 --force -f -' Dec 14 09:05:05.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:05.026: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:05.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1018" for this suite. • [SLOW TEST:12.209 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":17,"skipped":377,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:03.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:05:03.242: INFO: Waiting 
up to 5m0s for pod "downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567" in namespace "downward-api-7923" to be "Succeeded or Failed" Dec 14 09:05:03.245: INFO: Pod "downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955572ms Dec 14 09:05:05.250: INFO: Pod "downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008020103s STEP: Saw pod success Dec 14 09:05:05.250: INFO: Pod "downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567" satisfied condition "Succeeded or Failed" Dec 14 09:05:05.253: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567 container client-container: STEP: delete the pod Dec 14 09:05:05.270: INFO: Waiting for pod downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567 to disappear Dec 14 09:05:05.273: INFO: Pod downwardapi-volume-46dffa7b-8e1f-4124-8064-8cebfe811567 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:05.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7923" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:55.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 14 09:05:01.752: INFO: &Pod{ObjectMeta:{send-events-980fe0e1-3133-44db-94c7-6f1af7d9390e events-9200 284f2a5c-299b-4013-97fb-1ce26e067ce6 13948393 0 2021-12-14 09:04:55 +0000 UTC map[name:foo time:733793118] map[] [] [] [{e2e.test Update v1 2021-12-14 09:04:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:05:00 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gw8s9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol
:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw8s9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:04:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.245,StartTime:2021-12-14 09:04:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:04:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://cb9c0ceac1c2fb3750d27180e408e3fdb8500d089bae01fc673ea91a25b6b35c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Dec 14 09:05:03.759: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 14 09:05:05.763: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:05.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9200" for this suite. 
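The Events test above passes once it has seen one event from the scheduler and one from the kubelet for the pod. A sketch of that check over a list of event dicts; the field layout follows the core/v1 Event's `source.component`, but the sample events here are illustrative, not taken from this run:

```python
def saw_event_from(events, component):
    """True if any event was reported by the given source component."""
    return any(e.get("source", {}).get("component") == component for e in events)

# Illustrative events resembling what the scheduler and kubelet record.
events = [
    {"reason": "Scheduled", "source": {"component": "default-scheduler"}},
    {"reason": "Pulled",    "source": {"component": "kubelet"}},
    {"reason": "Started",   "source": {"component": "kubelet"}},
]

print(saw_event_from(events, "default-scheduler"))  # True
print(saw_event_from(events, "kubelet"))            # True
```

This mirrors the two STEPs in the log ("checking for scheduler event about the pod", "checking for kubelet event about the pod"), each of which is itself retried until the expected event appears.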
• [SLOW TEST:10.084 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":13,"skipped":177,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:46.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Dec 14 09:04:46.098: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:48.103: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:50.103: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:52.102: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:54.103: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Dec 14 09:04:54.115: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:56.120: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:04:58.120: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:00.121: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 14 09:05:00.137: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 14 09:05:00.141: INFO: Pod pod-with-poststart-exec-hook still exists Dec 14 09:05:02.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 14 09:05:02.146: INFO: Pod pod-with-poststart-exec-hook still exists Dec 14 09:05:04.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 14 09:05:04.145: INFO: Pod pod-with-poststart-exec-hook still exists Dec 14 09:05:06.143: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 14 
09:05:06.147: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:06.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5437" for this suite. • [SLOW TEST:20.102 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":293,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:01.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container 
should be terminated STEP: the termination message should be set Dec 14 09:05:07.807: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:07.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1261" for this suite. • [SLOW TEST:6.095 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":172,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:51.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-6846 STEP: creating replication controller nodeport-test in namespace services-6846 I1214 09:04:52.018700 16 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6846, replica count: 2 I1214 09:04:55.069796 16 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:04:58.070861 16 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:01.072168 16 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:05:01.072: INFO: Creating new exec pod Dec 14 09:05:08.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6846 exec execpod9gws4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Dec 14 09:05:08.352: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Dec 14 09:05:08.352: INFO: stdout: "nodeport-test-9v5jz" Dec 14 09:05:08.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6846 exec execpod9gws4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.186.95 80' Dec 14 09:05:08.578: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.186.95 80\nConnection to 10.130.186.95 80 port [tcp/http] succeeded!\n" Dec 14 09:05:08.578: INFO: stdout: "nodeport-test-9v5jz" Dec 14 09:05:08.578: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6846 exec execpod9gws4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.10 31540' Dec 14 09:05:08.837: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.10 31540\nConnection to 172.25.0.10 31540 port [tcp/*] succeeded!\n" Dec 14 09:05:08.837: INFO: stdout: "nodeport-test-9v5jz" Dec 14 09:05:08.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6846 exec execpod9gws4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.9 31540' Dec 14 09:05:09.101: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.9 31540\nConnection to 172.25.0.9 31540 port [tcp/*] succeeded!\n" Dec 14 09:05:09.101: INFO: stdout: "nodeport-test-sqq9r" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:09.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6846" for this suite. 
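The NodePort test above reaches the same backends four ways: by service DNS name on port 80, by ClusterIP (10.130.186.95:80), and by each node IP (172.25.0.10 and 172.25.0.9) on the allocated NodePort 31540. A sketch of that fan-out, with a stubbed `connect` standing in for the `nc -v -t -w 2` probe (function names are illustrative; the addresses and port come from this run's log):

```python
def check_endpoints(connect, service_host, cluster_ip, node_ips,
                    port=80, node_port=31540):
    """Probe the service via every path the e2e test exercises.

    Returns the (host, port) pairs that accepted a connection.
    """
    targets = [(service_host, port), (cluster_ip, port)]
    targets += [(ip, node_port) for ip in node_ips]
    return [t for t in targets if connect(*t)]

# Stub: pretend every target accepts, as in the successful run above.
ok = check_endpoints(lambda host, port: True,
                     "nodeport-test", "10.130.186.95",
                     ["172.25.0.10", "172.25.0.9"])
print(len(ok))  # 4
```

Note that the two node-IP probes in the log landed on different backend pods (`nodeport-test-9v5jz` vs `nodeport-test-sqq9r`), confirming the NodePort load-balances across replicas.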
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:17.150 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":14,"skipped":317,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:09.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:09.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5452" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":15,"skipped":330,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:06.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-e5b6c333-fbf4-43da-977e-86b8caa0b18d STEP: Creating a pod to test consume secrets Dec 14 09:05:06.221: INFO: Waiting up to 5m0s for pod "pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d" in namespace "secrets-4988" to be "Succeeded or Failed" Dec 14 09:05:06.224: INFO: Pod "pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.214679ms Dec 14 09:05:08.227: INFO: Pod "pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d": Phase="Running", Reason="", readiness=true. Elapsed: 2.006502087s Dec 14 09:05:10.232: INFO: Pod "pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010621927s STEP: Saw pod success Dec 14 09:05:10.232: INFO: Pod "pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d" satisfied condition "Succeeded or Failed" Dec 14 09:05:10.234: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d container secret-env-test: STEP: delete the pod Dec 14 09:05:10.251: INFO: Waiting for pod pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d to disappear Dec 14 09:05:10.254: INFO: Pod pod-secrets-48f08848-6272-4f1d-a554-71ad5d35a02d no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:10.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4988" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":296,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:05.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Dec 14 09:05:05.840: INFO: The status of Pod labelsupdatea863b4e9-5b43-49a3-abfe-1c1d951c0d77 is Pending, waiting for it to be Running (with Ready = 
true) Dec 14 09:05:07.845: INFO: The status of Pod labelsupdatea863b4e9-5b43-49a3-abfe-1c1d951c0d77 is Running (Ready = true) Dec 14 09:05:08.365: INFO: Successfully updated pod "labelsupdatea863b4e9-5b43-49a3-abfe-1c1d951c0d77" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:10.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2218" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":184,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:10.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics Dec 14 09:05:11.482: INFO: The status of Pod kube-controller-manager-capi-v1.22-control-plane-jzh89 is Running (Ready = true) Dec 14 09:05:12.355: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For 
garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:12.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5083" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":15,"skipped":191,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:05.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-795aade0-3730-4810-8090-de81366e0c8a STEP: Creating a pod to 
test consume secrets Dec 14 09:05:05.357: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709" in namespace "projected-3820" to be "Succeeded or Failed" Dec 14 09:05:05.360: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231601ms Dec 14 09:05:07.364: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007733305s Dec 14 09:05:09.369: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011890087s Dec 14 09:05:11.372: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015573647s Dec 14 09:05:13.377: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020377858s STEP: Saw pod success Dec 14 09:05:13.377: INFO: Pod "pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709" satisfied condition "Succeeded or Failed" Dec 14 09:05:13.381: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709 container projected-secret-volume-test: STEP: delete the pod Dec 14 09:05:13.398: INFO: Waiting for pod pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709 to disappear Dec 14 09:05:13.403: INFO: Pod pod-projected-secrets-13bdeb13-017b-4970-b2c4-49ddf9286709 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:13.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3820" for this suite. 
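The projected-secret test above sets an explicit item mode on the mounted file. One easily-missed detail visible in the pod dumps in this log is that the API server prints volume modes in decimal (e.g. `DefaultMode:*420`), while they are conventionally written octal in manifests. A quick check of that correspondence:

```python
# API-server dumps show modes in decimal; 420 is the common default.
default_mode = 420          # as printed in the pod spec dumps above
print(oct(default_mode))    # 0o644

# Going the other way: a manifest author writing 0644 means decimal 420.
assert 0o644 == 420

def mode_string(mode):
    """Render a numeric mode the way `ls -l` would (permission bits only)."""
    bits = "rwxrwxrwx"
    return "".join(b if mode & (1 << (8 - i)) else "-" for i, b in enumerate(bits))

print(mode_string(420))  # rw-r--r--
```

So the `DefaultMode:*420` seen in the Events test's pod dump is just 0644, i.e. owner read/write, group and world read.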
• [SLOW TEST:8.103 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:09.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Dec 14 09:05:09.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 create -f -' Dec 14 09:05:09.550: INFO: stderr: "" Dec 14 09:05:09.550: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 14 09:05:09.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:05:09.670: INFO: stderr: "" Dec 14 09:05:09.670: INFO: stdout: "update-demo-nautilus-866mf update-demo-nautilus-sxg2p " Dec 14 09:05:09.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods update-demo-nautilus-866mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:05:09.775: INFO: stderr: "" Dec 14 09:05:09.775: INFO: stdout: "" Dec 14 09:05:09.776: INFO: update-demo-nautilus-866mf is created but not running Dec 14 09:05:14.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Dec 14 09:05:14.894: INFO: stderr: "" Dec 14 09:05:14.895: INFO: stdout: "update-demo-nautilus-866mf update-demo-nautilus-sxg2p " Dec 14 09:05:14.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods update-demo-nautilus-866mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:05:15.004: INFO: stderr: "" Dec 14 09:05:15.004: INFO: stdout: "true" Dec 14 09:05:15.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods update-demo-nautilus-866mf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:05:15.112: INFO: stderr: "" Dec 14 09:05:15.112: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:05:15.112: INFO: validating pod update-demo-nautilus-866mf Dec 14 09:05:15.117: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:05:15.117: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 14 09:05:15.117: INFO: update-demo-nautilus-866mf is verified up and running Dec 14 09:05:15.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods update-demo-nautilus-sxg2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Dec 14 09:05:15.219: INFO: stderr: "" Dec 14 09:05:15.219: INFO: stdout: "true" Dec 14 09:05:15.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods update-demo-nautilus-sxg2p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Dec 14 09:05:15.320: INFO: stderr: "" Dec 14 09:05:15.320: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Dec 14 09:05:15.320: INFO: validating pod update-demo-nautilus-sxg2p Dec 14 09:05:15.324: INFO: got data: { "image": "nautilus.jpg" } Dec 14 09:05:15.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 14 09:05:15.324: INFO: update-demo-nautilus-sxg2p is verified up and running STEP: using delete to clean up resources Dec 14 09:05:15.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 delete --grace-period=0 --force -f -' Dec 14 09:05:15.426: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 14 09:05:15.426: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 14 09:05:15.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get rc,svc -l name=update-demo --no-headers' Dec 14 09:05:15.539: INFO: stderr: "No resources found in kubectl-7581 namespace.\n" Dec 14 09:05:15.539: INFO: stdout: "" Dec 14 09:05:15.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7581 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 14 09:05:15.647: INFO: stderr: "" Dec 14 09:05:15.647: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:15.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7581" for this suite. 
• [SLOW TEST:6.430 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":16,"skipped":337,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:05.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:16.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8828" for this suite. 
• [SLOW TEST:11.081 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":18,"skipped":386,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:16.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Dec 14 09:05:16.221: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Dec 14 09:05:16.226: INFO: starting watch STEP: patching STEP: updating Dec 14 09:05:16.237: INFO: waiting for watch events with expected annotations Dec 14 09:05:16.237: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:16.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "ingress-7446" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":19,"skipped":398,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:44.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-hzwp STEP: Creating a pod to test atomic-volume-subpath Dec 14 09:04:44.940: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzwp" in namespace "subpath-8291" to be "Succeeded or Failed" Dec 14 09:04:44.944: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.100732ms Dec 14 09:04:46.946: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005949407s Dec 14 09:04:48.951: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010429992s Dec 14 09:04:50.956: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 6.015067808s Dec 14 09:04:52.959: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.018967406s Dec 14 09:04:54.964: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 10.023534266s Dec 14 09:04:56.968: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 12.027215547s Dec 14 09:04:58.972: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 14.031725706s Dec 14 09:05:00.976: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 16.03577248s Dec 14 09:05:02.981: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 18.040373089s Dec 14 09:05:04.986: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 20.045918s Dec 14 09:05:06.991: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 22.050215787s Dec 14 09:05:08.996: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 24.055563582s Dec 14 09:05:11.000: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 26.059719526s Dec 14 09:05:13.008: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 28.067458518s Dec 14 09:05:15.013: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Running", Reason="", readiness=true. Elapsed: 30.072315665s Dec 14 09:05:17.017: INFO: Pod "pod-subpath-test-configmap-hzwp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 32.076889228s STEP: Saw pod success Dec 14 09:05:17.017: INFO: Pod "pod-subpath-test-configmap-hzwp" satisfied condition "Succeeded or Failed" Dec 14 09:05:17.021: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-subpath-test-configmap-hzwp container test-container-subpath-configmap-hzwp: STEP: delete the pod Dec 14 09:05:17.035: INFO: Waiting for pod pod-subpath-test-configmap-hzwp to disappear Dec 14 09:05:17.038: INFO: Pod pod-subpath-test-configmap-hzwp no longer exists STEP: Deleting pod pod-subpath-test-configmap-hzwp Dec 14 09:05:17.038: INFO: Deleting pod "pod-subpath-test-configmap-hzwp" in namespace "subpath-8291" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:17.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8291" for this suite. • [SLOW TEST:32.147 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":256,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:15.676: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Dec 14 09:05:15.714: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-4317 b63f54ca-aad7-4903-9919-9c687a53f124 13949048 0 2021-12-14 09:05:15 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-12-14 09:05:15 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sn2rj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sn2rj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,Ru
nAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:05:15.718: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:17.723: INFO: The 
status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:19.723: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Dec 14 09:05:19.724: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4317 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:05:19.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Dec 14 09:05:19.917: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4317 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:05:19.917: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:05:20.077: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:20.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4317" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":17,"skipped":344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:20.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:20.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2868" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":18,"skipped":398,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:13.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:05:13.508: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201" in namespace "security-context-test-3754" to be "Succeeded or Failed" Dec 14 09:05:13.511: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.190548ms Dec 14 09:05:15.515: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006587172s Dec 14 09:05:17.519: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011212126s Dec 14 09:05:19.525: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017096827s Dec 14 09:05:21.530: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02155754s Dec 14 09:05:21.530: INFO: Pod "alpine-nnp-false-dcdeddc5-8ae5-40cf-a6ce-08025ad14201" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:21.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3754" for this suite. 
• [SLOW TEST:8.084 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:10.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Dec 14 09:05:10.319: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:12.325: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:14.323: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:16.323: INFO: The status of Pod pod-adoption-release is 
Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:18.323: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:20.323: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Dec 14 09:05:21.341: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:22.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3177" for this suite. • [SLOW TEST:12.089 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:07.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-76941836-9e8f-484b-8e39-8bb6095c5da4 STEP: Creating a pod to test consume configMaps 
Dec 14 09:05:07.888: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5" in namespace "projected-5184" to be "Succeeded or Failed" Dec 14 09:05:07.891: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.956274ms Dec 14 09:05:09.896: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007600482s Dec 14 09:05:11.901: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012657286s Dec 14 09:05:13.906: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017595009s Dec 14 09:05:15.911: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022682421s Dec 14 09:05:17.916: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.028007136s Dec 14 09:05:19.921: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.032651269s Dec 14 09:05:21.926: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.037577351s Dec 14 09:05:23.931: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.04327141s STEP: Saw pod success Dec 14 09:05:23.932: INFO: Pod "pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5" satisfied condition "Succeeded or Failed" Dec 14 09:05:23.935: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5 container agnhost-container: STEP: delete the pod Dec 14 09:05:23.952: INFO: Waiting for pod pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5 to disappear Dec 14 09:05:23.955: INFO: Pod pod-projected-configmaps-0375f0e4-10c7-4d47-a63c-538c4f3bcce5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:23.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5184" for this suite. • [SLOW TEST:16.125 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":173,"failed":0} SSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":19,"skipped":297,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:22.369: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:05:22.401: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Dec 14 09:05:22.414: INFO: The status of Pod pod-exec-websocket-11761143-0a77-42be-84e2-82f06d279a0a is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:24.418: INFO: The status of Pod pod-exec-websocket-11761143-0a77-42be-84e2-82f06d279a0a is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:24.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5403" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":297,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:20.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 14 09:05:20.385: INFO: Waiting up to 5m0s for pod "pod-5221b006-3936-47c5-85de-715bfe07e66a" in namespace "emptydir-8852" to be "Succeeded or Failed" Dec 14 09:05:20.388: INFO: Pod "pod-5221b006-3936-47c5-85de-715bfe07e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075356ms Dec 14 09:05:22.392: INFO: Pod "pod-5221b006-3936-47c5-85de-715bfe07e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00713031s Dec 14 09:05:24.398: INFO: Pod "pod-5221b006-3936-47c5-85de-715bfe07e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012764126s Dec 14 09:05:26.402: INFO: Pod "pod-5221b006-3936-47c5-85de-715bfe07e66a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017388955s STEP: Saw pod success Dec 14 09:05:26.402: INFO: Pod "pod-5221b006-3936-47c5-85de-715bfe07e66a" satisfied condition "Succeeded or Failed" Dec 14 09:05:26.406: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-5221b006-3936-47c5-85de-715bfe07e66a container test-container: STEP: delete the pod Dec 14 09:05:26.420: INFO: Waiting for pod pod-5221b006-3936-47c5-85de-715bfe07e66a to disappear Dec 14 09:05:26.424: INFO: Pod pod-5221b006-3936-47c5-85de-715bfe07e66a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:26.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8852" for this suite. • [SLOW TEST:6.084 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:24.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-843f712d-43b7-4254-b0fd-e3d7d30dc70d STEP: Creating secret with name secret-projected-all-test-volume-6a6215d5-9140-47f9-9942-3799d961af9e STEP: Creating a pod to test Check all projections for projected volume plugin Dec 14 09:05:24.673: INFO: Waiting up to 5m0s for pod "projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959" in namespace "projected-6533" to be "Succeeded or Failed" Dec 14 09:05:24.676: INFO: Pod "projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485405ms Dec 14 09:05:26.681: INFO: Pod "projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007923522s STEP: Saw pod success Dec 14 09:05:26.681: INFO: Pod "projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959" satisfied condition "Succeeded or Failed" Dec 14 09:05:26.684: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959 container projected-all-volume-test: STEP: delete the pod Dec 14 09:05:26.697: INFO: Waiting for pod projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959 to disappear Dec 14 09:05:26.700: INFO: Pod projected-volume-e1a020e8-6bfd-4426-ba76-0b1f2bdf9959 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:26.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6533" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:26.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:26.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1056" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":22,"skipped":385,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:26.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:27.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3158" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":23,"skipped":393,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:17.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240 Dec 14 09:05:17.100: INFO: Pod name my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240: Found 0 pods out of 1 Dec 14 09:05:22.104: INFO: Pod name my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240: Found 1 pods out of 1 Dec 14 09:05:22.105: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240" are running Dec 14 09:05:22.107: INFO: Pod "my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240-9wrwg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:05:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:05:19 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:05:19 +0000 UTC Reason: Message:} {Type:PodScheduled 
Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-12-14 09:05:17 +0000 UTC Reason: Message:}]) Dec 14 09:05:22.107: INFO: Trying to dial the pod Dec 14 09:05:27.121: INFO: Controller my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240: Got expected result from replica 1 [my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240-9wrwg]: "my-hostname-basic-aa0d4eb0-e7d1-43cc-8311-4a7b76977240-9wrwg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:27.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1737" for this suite. • [SLOW TEST:10.067 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":21,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:21.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 14 09:05:21.696: INFO: Waiting up to 5m0s for pod "pod-e72dd95d-2318-416e-8df9-669712fda2de" in namespace "emptydir-2710" to be "Succeeded or Failed" Dec 14 09:05:21.700: INFO: Pod "pod-e72dd95d-2318-416e-8df9-669712fda2de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.59491ms Dec 14 09:05:23.705: INFO: Pod "pod-e72dd95d-2318-416e-8df9-669712fda2de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008926029s Dec 14 09:05:25.710: INFO: Pod "pod-e72dd95d-2318-416e-8df9-669712fda2de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014009326s Dec 14 09:05:27.715: INFO: Pod "pod-e72dd95d-2318-416e-8df9-669712fda2de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01876638s STEP: Saw pod success Dec 14 09:05:27.715: INFO: Pod "pod-e72dd95d-2318-416e-8df9-669712fda2de" satisfied condition "Succeeded or Failed" Dec 14 09:05:27.718: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-e72dd95d-2318-416e-8df9-669712fda2de container test-container: STEP: delete the pod Dec 14 09:05:27.734: INFO: Waiting for pod pod-e72dd95d-2318-416e-8df9-669712fda2de to disappear Dec 14 09:05:27.738: INFO: Pod pod-e72dd95d-2318-416e-8df9-669712fda2de no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:27.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2710" for this suite. 
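Each volume test above follows the same pattern: create the pod, then poll its phase roughly every two seconds until it reaches Succeeded or Failed or the 5m0s timeout expires. A minimal sketch of that wait loop, with a stubbed `get_phase` standing in for `kubectl get pod -o jsonpath='{.status.phase}'` (the helper name and the stub are illustrative, not the framework's actual code):

```shell
#!/bin/sh
# Poll a pod's phase until it terminates or the timeout (seconds) expires.
# get_phase sets $PHASE; against a real cluster it would shell out to kubectl.
wait_for_pod() {
  pod=$1; timeout=$2; elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    get_phase "$pod"
    echo "Pod \"$pod\": Phase=\"$PHASE\". Elapsed: ${elapsed}s"
    case $PHASE in
      Succeeded|Failed) return 0 ;;   # terminal phases end the wait
    esac
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1                            # timed out while still Pending/Running
}

# Stub: report Pending twice, then Succeeded (simulating the runs above).
COUNT=0
get_phase() {
  COUNT=$((COUNT + 1))
  if [ "$COUNT" -ge 3 ]; then PHASE=Succeeded; else PHASE=Pending; fi
}

wait_for_pod pod-demo 300 && echo 'satisfied condition "Succeeded or Failed"'
```

The real framework also records the sub-second Elapsed values seen in the log; the sketch keeps whole seconds for brevity.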
• [SLOW TEST:6.098 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":123,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:16.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9577 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9577 I1214 09:05:16.366093 25 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9577, replica count: 2 I1214 09:05:19.417314 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:22.418512 25 runners.go:190] externalname-service Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:25.419705 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:05:25.419: INFO: Creating new exec pod Dec 14 09:05:28.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9577 exec execpodjqw9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Dec 14 09:05:28.676: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Dec 14 09:05:28.676: INFO: stdout: "externalname-service-dppkf" Dec 14 09:05:28.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9577 exec execpodjqw9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.82.185 80' Dec 14 09:05:28.896: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.82.185 80\nConnection to 10.137.82.185 80 port [tcp/http] succeeded!\n" Dec 14 09:05:28.896: INFO: stdout: "externalname-service-dppkf" Dec 14 09:05:28.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9577 exec execpodjqw9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.10 32571' Dec 14 09:05:29.127: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.10 32571\nConnection to 172.25.0.10 32571 port [tcp/*] succeeded!\n" Dec 14 09:05:29.127: INFO: stdout: "externalname-service-dppkf" Dec 14 09:05:29.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9577 exec execpodjqw9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.9 32571' Dec 14 09:05:29.395: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.9 32571\nConnection to 172.25.0.9 32571 port [tcp/*] succeeded!\n" Dec 14 09:05:29.395: INFO: stdout: "externalname-service-dppkf" Dec 
14 09:05:29.395: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:29.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9577" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:13.097 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":20,"skipped":413,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:27.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0748e973-9b30-4be3-a3c0-539c693cf8e9 STEP: Creating a pod to test consume configMaps Dec 14 09:05:27.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc" in namespace "configmap-4358" to be "Succeeded or 
Failed" Dec 14 09:05:27.816: INFO: Pod "pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902115ms Dec 14 09:05:29.821: INFO: Pod "pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008486232s STEP: Saw pod success Dec 14 09:05:29.821: INFO: Pod "pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc" satisfied condition "Succeeded or Failed" Dec 14 09:05:29.824: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc container configmap-volume-test: STEP: delete the pod Dec 14 09:05:29.839: INFO: Waiting for pod pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc to disappear Dec 14 09:05:29.842: INFO: Pod pod-configmaps-d6deefbc-dc76-4e1b-be70-ee13c13e6efc no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:29.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4358" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":130,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:29.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:29.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4468" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":10,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:29.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:05:29.472: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 14 09:05:31.509: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:32.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9113" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":21,"skipped":420,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:27.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Dec 14 09:05:27.123: INFO: The status of Pod pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:29.126: INFO: The status of Pod pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 14 09:05:29.640: INFO: Successfully updated pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a" Dec 14 09:05:29.640: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a" in namespace "pods-8602" to be "terminated due to deadline exceeded" Dec 14 09:05:29.642: INFO: Pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.412534ms Dec 14 09:05:31.647: INFO: Pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a": Phase="Running", Reason="", readiness=true. Elapsed: 2.007275565s Dec 14 09:05:33.653: INFO: Pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.012852727s Dec 14 09:05:33.653: INFO: Pod "pod-update-activedeadlineseconds-666dceca-9c53-4426-bdab-bb46450b823a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:33.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8602" for this suite. • [SLOW TEST:6.582 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":409,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:26.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
[AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:34.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6706" for this suite. • [SLOW TEST:8.065 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":448,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:32.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:05:32.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899" in namespace "downward-api-3853" to be "Succeeded or Failed" Dec 14 09:05:32.590: INFO: Pod 
"downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899": Phase="Pending", Reason="", readiness=false. Elapsed: 3.076724ms Dec 14 09:05:34.594: INFO: Pod "downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007575411s STEP: Saw pod success Dec 14 09:05:34.594: INFO: Pod "downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899" satisfied condition "Succeeded or Failed" Dec 14 09:05:34.597: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899 container client-container: STEP: delete the pod Dec 14 09:05:34.612: INFO: Waiting for pod downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899 to disappear Dec 14 09:05:34.614: INFO: Pod downwardapi-volume-df0426e0-f78b-43d9-96f0-8b0418301899 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:34.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3853" for this suite. 
•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":428,"failed":0} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:34.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-4a116244-be6b-41d4-977a-006f5e02856e STEP: Creating secret with name s-test-opt-upd-13f6e847-3acc-4aa8-ab8c-279243b827a2 STEP: Creating the pod Dec 14 09:05:34.669: INFO: The status of Pod pod-projected-secrets-befe61a8-3ba2-454a-99f8-bf116c54a9ef is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:36.674: INFO: The status of Pod pod-projected-secrets-befe61a8-3ba2-454a-99f8-bf116c54a9ef is Running (Ready = true) STEP: Deleting secret s-test-opt-del-4a116244-be6b-41d4-977a-006f5e02856e STEP: Updating secret s-test-opt-upd-13f6e847-3acc-4aa8-ab8c-279243b827a2 STEP: Creating secret with name s-test-opt-create-3dd1d243-2232-4c4e-87fd-355826053727 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:38.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1321" for this suite. 
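The "waiting to observe update in volume" step above polls the mounted file until the kubelet syncs the new Secret content. The same wait can be sketched against a plain local file standing in for the volume mount (the `poll_file` helper and the temp-file stand-in are illustrative, not the framework's code):

```shell
#!/bin/sh
# Poll a file until its content equals $want, retrying once per second.
poll_file() {
  file=$1; want=$2; tries=$3; i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$(cat "$file" 2>/dev/null)" = "$want" ] && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Stand-in for the projected volume: a temp file rewritten in the background,
# the way the kubelet eventually rewrites the mounted Secret key.
mount=$(mktemp)
printf 'value-1\n' > "$mount"
( sleep 1; printf 'value-2\n' > "$mount" ) &
if poll_file "$mount" value-2 30; then
  echo "observed update in volume"
fi
wait
rm -f "$mount"
```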
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:12.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8456 STEP: creating service affinity-nodeport in namespace services-8456 STEP: creating replication controller affinity-nodeport in namespace services-8456 I1214 09:05:12.467030 23 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8456, replica count: 3 I1214 09:05:15.518747 23 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:18.520259 23 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:21.521129 23 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:24.521380 23 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:05:24.534: INFO: Creating new exec pod Dec 14 09:05:31.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8456 exec execpod-affinityw2mql -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' Dec 14 09:05:31.824: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Dec 14 09:05:31.824: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:05:31.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8456 exec execpod-affinityw2mql -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.146.121 80' Dec 14 09:05:32.084: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.146.121 80\nConnection to 10.129.146.121 80 port [tcp/http] succeeded!\n" Dec 14 09:05:32.084: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:05:32.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8456 exec execpod-affinityw2mql -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.10 31290' Dec 14 09:05:32.336: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.10 31290\nConnection to 172.25.0.10 31290 port [tcp/*] succeeded!\n" Dec 14 09:05:32.336: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:05:32.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8456 exec execpod-affinityw2mql -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.9 31290' Dec 14 09:05:32.571: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.9 31290\nConnection to 172.25.0.9 31290 port [tcp/*] 
succeeded!\n" Dec 14 09:05:32.571: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:05:32.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8456 exec execpod-affinityw2mql -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.25.0.10:31290/ ; done' Dec 14 09:05:32.967: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:31290/\n" Dec 14 09:05:32.967: INFO: stdout: "\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf\naffinity-nodeport-w7vxf" Dec 
14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Received response from host: affinity-nodeport-w7vxf Dec 14 09:05:32.967: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8456, will wait for the garbage collector to delete the pods Dec 14 09:05:33.034: INFO: Deleting ReplicationController affinity-nodeport took: 4.868336ms Dec 14 09:05:33.135: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.122197ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:39.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8456" for this suite. 
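Editor's note: the session-affinity test above created service `affinity-nodeport` backed by 3 replicas and verified that all 16 requests to the NodePort landed on the same pod (`affinity-nodeport-w7vxf`). The Service it exercises corresponds roughly to the sketch below; only the names and ports appear in the log, so the selector label and the choice to pin `nodePort` are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport
  namespace: services-8456
spec:
  type: NodePort
  sessionAffinity: ClientIP    # all requests from one client IP stick to one backend pod
  selector:
    name: affinity-nodeport    # assumed label; the e2e replication controller sets its own
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31290            # port observed in the log; normally left for the API server to allocate
```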
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:27.453 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":207,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:29.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:05:30.800: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Dec 14 09:05:32.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, 
loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:05:34.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:05:36.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069530, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:05:39.824: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:39.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3108" for this suite. STEP: Destroying namespace "webhook-3108-markers" for this suite. 
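Editor's note: the webhook test above deploys `sample-webhook-deployment`, pairs it with the `e2e-test-webhook` service, then lists and collection-deletes mutating webhooks. A minimal `MutatingWebhookConfiguration` of the kind being listed might look like the following sketch; the configuration name, label, and path are assumptions, while the service name and namespace come from the log.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook    # hypothetical name
  labels:
    e2e-list-test: "true"            # assumed label used to list/delete the collection
webhooks:
- name: adding-configmap-data.example.com   # hypothetical webhook name
  clientConfig:
    service:
      name: e2e-test-webhook         # service name observed in the log
      namespace: webhook-3108
      path: /mutating-configmaps     # assumed path
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["configmaps"]
  sideEffects: None                  # required in admissionregistration.k8s.io/v1
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
```

After the collection of these configurations is deleted, a newly created ConfigMap is no longer mutated, which is exactly what the "Creating a configMap that should not be mutated" step checks.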
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.993 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":157,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:38.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Dec 14 09:05:38.896: INFO: Waiting up to 5m0s for pod "var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3" in namespace "var-expansion-9991" to be "Succeeded or Failed" Dec 14 09:05:38.899: INFO: Pod "var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443118ms Dec 14 09:05:40.903: INFO: Pod "var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00697313s STEP: Saw pod success Dec 14 09:05:40.904: INFO: Pod "var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3" satisfied condition "Succeeded or Failed" Dec 14 09:05:40.907: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3 container dapi-container: STEP: delete the pod Dec 14 09:05:40.922: INFO: Waiting for pod var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3 to disappear Dec 14 09:05:40.926: INFO: Pod var-expansion-069f8062-aae4-492f-a122-cbc26503f3d3 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:40.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9991" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":476,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:33.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-64659d53-e9be-4b47-96ce-acdf4e54ae63 STEP: Creating a pod to test consume secrets Dec 14 09:05:33.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308" in namespace "projected-6238" to 
be "Succeeded or Failed" Dec 14 09:05:33.720: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269161ms Dec 14 09:05:35.724: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007013052s Dec 14 09:05:37.728: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01087508s Dec 14 09:05:39.732: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308": Phase="Running", Reason="", readiness=true. Elapsed: 6.014488331s Dec 14 09:05:41.735: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018350854s STEP: Saw pod success Dec 14 09:05:41.736: INFO: Pod "pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308" satisfied condition "Succeeded or Failed" Dec 14 09:05:41.739: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308 container secret-volume-test: STEP: delete the pod Dec 14 09:05:41.754: INFO: Waiting for pod pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308 to disappear Dec 14 09:05:41.756: INFO: Pod pod-projected-secrets-60abd4df-2624-4106-8e4d-a16c2db6f308 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:41.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6238" for this suite. 
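Editor's note: the projected-secret test above mounts the same secret into a pod through two separate projected volumes. A hedged reconstruction of that pod spec follows; the secret name is taken from the log, while the image, command, and mount paths are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.35                 # assumed image
    command: ["sh", "-c", "cat /etc/projected-secret-1/* /etc/projected-secret-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-64659d53-e9be-4b47-96ce-acdf4e54ae63
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-64659d53-e9be-4b47-96ce-acdf4e54ae63
```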
• [SLOW TEST:8.093 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":411,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:34.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Dec 14 09:05:34.750: INFO: Waiting up to 5m0s for pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc" in namespace "svcaccounts-2018" to be "Succeeded or Failed" Dec 14 09:05:34.754: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447458ms Dec 14 09:05:36.758: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00740156s Dec 14 09:05:38.761: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010977198s Dec 14 09:05:40.765: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.014632764s Dec 14 09:05:42.770: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019376254s STEP: Saw pod success Dec 14 09:05:42.770: INFO: Pod "test-pod-ad360223-daa9-4984-ae9f-b974577d50bc" satisfied condition "Succeeded or Failed" Dec 14 09:05:42.773: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod test-pod-ad360223-daa9-4984-ae9f-b974577d50bc container agnhost-container: STEP: delete the pod Dec 14 09:05:42.790: INFO: Waiting for pod test-pod-ad360223-daa9-4984-ae9f-b974577d50bc to disappear Dec 14 09:05:42.793: INFO: Pod test-pod-ad360223-daa9-4984-ae9f-b974577d50bc no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:42.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2018" for this suite. • [SLOW TEST:8.087 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":21,"skipped":500,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:40.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Dec 14 09:05:41.019: INFO: Waiting up to 5m0s for pod "pod-ede8cbc0-7843-42e3-8978-bd22afca0e15" in namespace "emptydir-4796" to be "Succeeded or Failed" Dec 14 09:05:41.022: INFO: Pod "pod-ede8cbc0-7843-42e3-8978-bd22afca0e15": Phase="Pending", Reason="", readiness=false. Elapsed: 3.416327ms Dec 14 09:05:43.028: INFO: Pod "pod-ede8cbc0-7843-42e3-8978-bd22afca0e15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009124941s STEP: Saw pod success Dec 14 09:05:43.028: INFO: Pod "pod-ede8cbc0-7843-42e3-8978-bd22afca0e15" satisfied condition "Succeeded or Failed" Dec 14 09:05:43.031: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-ede8cbc0-7843-42e3-8978-bd22afca0e15 container test-container: STEP: delete the pod Dec 14 09:05:43.045: INFO: Waiting for pod pod-ede8cbc0-7843-42e3-8978-bd22afca0e15 to disappear Dec 14 09:05:43.049: INFO: Pod pod-ede8cbc0-7843-42e3-8978-bd22afca0e15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:43.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4796" for this suite. 
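Editor's note: the emptyDir test above verifies the mount mode of a tmpfs-backed volume. Setting `medium: Memory` is what makes the emptyDir tmpfs rather than node-disk backed; the pod below is a minimal sketch of that setup (image and command are assumptions).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.35              # assumed image
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir, as exercised by the test above
```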
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:43.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Dec 14 09:05:43.146: INFO: created test-podtemplate-1 Dec 14 09:05:43.150: INFO: created test-podtemplate-2 Dec 14 09:05:43.153: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Dec 14 09:05:43.156: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Dec 14 09:05:43.168: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:43.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1834" for this suite. 
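Editor's note: the PodTemplates test above creates three templates with a shared label and removes them with a single DeleteCollection request. One of the templates could look like the sketch below (the label key and pod contents are assumptions; only the `test-podtemplate-N` names appear in the log).

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-1
  labels:
    podtemplate-set: "true"   # assumed label; the collection is deleted by label selector
template:
  metadata:
    labels:
      app: example
  spec:
    containers:
    - name: example
      image: busybox:1.35     # assumed image
      command: ["sleep", "3600"]
```

The equivalent of the DeleteCollection call from the command line would be something like `kubectl delete podtemplates -l podtemplate-set=true -n podtemplate-1834`.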
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":26,"skipped":511,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:27.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-6148 [It] should validate Statefulset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-6148 Dec 14 09:05:27.342: INFO: Found 0 stateful pods, waiting for 1 Dec 14 09:05:37.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Patch Statefulset to include a label STEP: Getting /status Dec 14 09:05:37.364: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) STEP: updating the StatefulSet Status Dec 14 09:05:37.371: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the statefulset status to be updated Dec 14 09:05:37.373: INFO: Observed &StatefulSet event: ADDED Dec 14 09:05:37.373: INFO: 
Found Statefulset ss in namespace statefulset-6148 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Dec 14 09:05:37.373: INFO: Statefulset ss has an updated status STEP: patching the Statefulset Status Dec 14 09:05:37.373: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Dec 14 09:05:37.379: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} STEP: watching for the Statefulset status to be patched Dec 14 09:05:37.381: INFO: Observed &StatefulSet event: ADDED [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:05:37.381: INFO: Deleting all statefulset in ns statefulset-6148 Dec 14 09:05:37.384: INFO: Scaling statefulset ss to 0 Dec 14 09:05:47.399: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:05:47.403: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:47.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6148" for this suite. 
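Editor's note: the StatefulSet test above updates and then patches the `/status` subresource directly. The patch payload is printed verbatim in the log; expressed as a manifest fragment it is:

```yaml
# JSON merge patch applied to the StatefulSet's /status subresource (payload from the log):
status:
  conditions:
  - type: StatusPatched
    status: "True"
```

The e2e framework applies this through the API client. With a recent kubectl it could be approximated by `kubectl patch statefulset ss --subresource=status --type=merge -p '<payload>'`, though the `--subresource` flag only became available in later kubectl releases (v1.24+), not the v1.22 client used in this run.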
• [SLOW TEST:20.134 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 should validate Statefulset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":22,"skipped":329,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:39.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:05:39.916: INFO: Waiting up to 5m0s for pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70" in namespace "projected-1365" to be "Succeeded or Failed" Dec 14 09:05:39.918: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.517618ms Dec 14 09:05:41.924: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008151745s Dec 14 09:05:43.928: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012227013s Dec 14 09:05:45.933: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016938246s Dec 14 09:05:47.937: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021720965s STEP: Saw pod success Dec 14 09:05:47.938: INFO: Pod "downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70" satisfied condition "Succeeded or Failed" Dec 14 09:05:47.941: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70 container client-container: STEP: delete the pod Dec 14 09:05:47.953: INFO: Waiting for pod downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70 to disappear Dec 14 09:05:47.956: INFO: Pod downwardapi-volume-903555a5-156b-463f-ad19-80b977e80b70 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:47.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1365" for this suite. 
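Editor's note: the downward API test above checks that when a container sets no memory limit, a `resourceFieldRef` for `limits.memory` reports the node's allocatable memory instead. A hedged sketch of such a pod (image, command, and paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.35              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits set: the reported value falls back to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```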
• [SLOW TEST:8.084 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:40.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Dec 14 09:05:40.041: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:49.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8039" for this suite. 
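Editor's note: the InitContainer test above relies on the interaction between a failing init container and `restartPolicy: Never`: the init container is never retried, so the pod fails permanently and the app container never starts. A minimal sketch (names and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail-example   # hypothetical name
spec:
  restartPolicy: Never          # a failing init container fails the pod permanently
  initContainers:
  - name: init1
    image: busybox:1.35         # assumed image
    command: ["/bin/false"]     # always exits non-zero, so run1 below never starts
  containers:
  - name: run1
    image: busybox:1.35
    command: ["/bin/true"]
```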
• [SLOW TEST:9.542 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":12,"skipped":167,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:48.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-be8fe373-e96f-49e4-b8d2-aa501057c73a STEP: Creating a pod to test consume configMaps Dec 14 09:05:48.058: INFO: Waiting up to 5m0s for pod "pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef" in namespace "configmap-3263" to be "Succeeded or Failed" Dec 14 09:05:48.061: INFO: Pod "pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941655ms Dec 14 09:05:50.066: INFO: Pod "pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008159747s STEP: Saw pod success Dec 14 09:05:50.066: INFO: Pod "pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef" satisfied condition "Succeeded or Failed" Dec 14 09:05:50.069: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef container agnhost-container: STEP: delete the pod Dec 14 09:05:50.086: INFO: Waiting for pod pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef to disappear Dec 14 09:05:50.090: INFO: Pod pod-configmaps-6806e97b-9097-4d09-a5c1-e659279eaeef no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:50.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3263" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":236,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:23.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-2msb STEP: Creating a pod to test atomic-volume-subpath Dec 14 09:05:24.045: INFO: 
Waiting up to 5m0s for pod "pod-subpath-test-configmap-2msb" in namespace "subpath-1263" to be "Succeeded or Failed" Dec 14 09:05:24.048: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.500205ms Dec 14 09:05:26.053: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007971827s Dec 14 09:05:28.058: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013434421s Dec 14 09:05:30.063: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 6.018216416s Dec 14 09:05:32.067: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 8.02214218s Dec 14 09:05:34.073: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 10.028005772s Dec 14 09:05:36.078: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 12.03298133s Dec 14 09:05:38.083: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 14.038373248s Dec 14 09:05:40.088: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 16.043079593s Dec 14 09:05:42.093: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 18.048352403s Dec 14 09:05:44.097: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 20.05258664s Dec 14 09:05:46.101: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 22.056064444s Dec 14 09:05:48.106: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Running", Reason="", readiness=true. Elapsed: 24.060936952s Dec 14 09:05:50.111: INFO: Pod "pod-subpath-test-configmap-2msb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.065929908s STEP: Saw pod success Dec 14 09:05:50.111: INFO: Pod "pod-subpath-test-configmap-2msb" satisfied condition "Succeeded or Failed" Dec 14 09:05:50.114: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-subpath-test-configmap-2msb container test-container-subpath-configmap-2msb: STEP: delete the pod Dec 14 09:05:50.132: INFO: Waiting for pod pod-subpath-test-configmap-2msb to disappear Dec 14 09:05:50.135: INFO: Pod pod-subpath-test-configmap-2msb no longer exists STEP: Deleting pod pod-subpath-test-configmap-2msb Dec 14 09:05:50.135: INFO: Deleting pod "pod-subpath-test-configmap-2msb" in namespace "subpath-1263" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:50.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1263" for this suite. • [SLOW TEST:26.148 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":184,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:47.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-eb30a49f-6372-4703-8c8d-9ba987292dbb STEP: Creating a pod to test consume secrets Dec 14 09:05:47.570: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824" in namespace "projected-8079" to be "Succeeded or Failed" Dec 14 09:05:47.573: INFO: Pod "pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378603ms Dec 14 09:05:49.578: INFO: Pod "pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824": Phase="Running", Reason="", readiness=true. Elapsed: 2.00810711s Dec 14 09:05:51.583: INFO: Pod "pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013224834s STEP: Saw pod success Dec 14 09:05:51.583: INFO: Pod "pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824" satisfied condition "Succeeded or Failed" Dec 14 09:05:51.589: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824 container projected-secret-volume-test: STEP: delete the pod Dec 14 09:05:51.604: INFO: Waiting for pod pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824 to disappear Dec 14 09:05:51.607: INFO: Pod pod-projected-secrets-ba6d0c2a-b0a8-4389-bb6a-97ebeca7f824 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:51.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8079" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":370,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:31.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod 
liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d in namespace container-probe-3508 Dec 14 09:03:33.501: INFO: Started pod liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d in namespace container-probe-3508 STEP: checking the pod's current state and verifying that restartCount is present Dec 14 09:03:33.505: INFO: Initial restart count of pod liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is 0 Dec 14 09:03:53.552: INFO: Restart count of pod container-probe-3508/liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is now 1 (20.047511237s elapsed) Dec 14 09:04:13.594: INFO: Restart count of pod container-probe-3508/liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is now 2 (40.088796965s elapsed) Dec 14 09:04:33.642: INFO: Restart count of pod container-probe-3508/liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is now 3 (1m0.137393199s elapsed) Dec 14 09:04:57.693: INFO: Restart count of pod container-probe-3508/liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is now 4 (1m24.18789068s elapsed) Dec 14 09:05:53.817: INFO: Restart count of pod container-probe-3508/liveness-3a95c255-feb1-4fbf-a1d9-36fe21681b7d is now 5 (2m20.311892535s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:53.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3508" for this suite. 
• [SLOW TEST:142.390 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:53.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Dec 14 09:05:54.452: INFO: created pod pod-service-account-defaultsa Dec 14 09:05:54.452: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 14 09:05:54.456: INFO: created pod pod-service-account-mountsa Dec 14 09:05:54.456: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 14 09:05:54.461: INFO: created pod pod-service-account-nomountsa Dec 14 09:05:54.461: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 14 09:05:54.465: INFO: created pod pod-service-account-defaultsa-mountspec Dec 14 09:05:54.465: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 14 09:05:54.470: INFO: created pod 
pod-service-account-mountsa-mountspec Dec 14 09:05:54.470: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 14 09:05:54.473: INFO: created pod pod-service-account-nomountsa-mountspec Dec 14 09:05:54.474: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 14 09:05:54.477: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 14 09:05:54.477: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 14 09:05:54.480: INFO: created pod pod-service-account-mountsa-nomountspec Dec 14 09:05:54.480: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 14 09:05:54.484: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 14 09:05:54.484: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:54.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-487" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":9,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:54.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:54.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8412" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":10,"skipped":238,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:43.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1700 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1700 I1214 09:05:43.256737 25 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1700, replica count: 2 I1214 09:05:46.307465 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:49.308120 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:05:49.308: INFO: Creating new exec pod Dec 14 09:05:54.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1700 exec execpod7gxs2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Dec 14 09:05:54.602: 
INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Dec 14 09:05:54.602: INFO: stdout: "externalname-service-f2kmp" Dec 14 09:05:54.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1700 exec execpod7gxs2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.80.205 80' Dec 14 09:05:54.853: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.80.205 80\nConnection to 10.130.80.205 80 port [tcp/http] succeeded!\n" Dec 14 09:05:54.853: INFO: stdout: "externalname-service-f2kmp" Dec 14 09:05:54.853: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1700" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:11.678 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":27,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:50.138: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:05:50.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b" in namespace "downward-api-3009" to be "Succeeded or Failed" Dec 14 09:05:50.177: INFO: Pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173033ms Dec 14 09:05:52.182: INFO: Pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008307536s Dec 14 09:05:54.188: INFO: Pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014061219s Dec 14 09:05:56.194: INFO: Pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019977662s STEP: Saw pod success Dec 14 09:05:56.194: INFO: Pod "downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b" satisfied condition "Succeeded or Failed" Dec 14 09:05:56.198: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b container client-container: STEP: delete the pod Dec 14 09:05:56.214: INFO: Waiting for pod downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b to disappear Dec 14 09:05:56.218: INFO: Pod downwardapi-volume-9d899e72-adc7-4142-bc5f-6ff9e422343b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:05:56.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3009" for this suite. • [SLOW TEST:6.091 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":248,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:54.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-0acc8f5b-f59a-4abc-9110-d238fe28bd2c STEP: Creating a pod to test consume configMaps Dec 14 09:05:54.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27" in namespace "projected-6172" to be "Succeeded or Failed" Dec 14 09:05:54.656: INFO: Pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.599537ms Dec 14 09:05:56.660: INFO: Pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006751318s Dec 14 09:05:58.664: INFO: Pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010733921s Dec 14 09:06:00.668: INFO: Pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014776369s STEP: Saw pod success Dec 14 09:06:00.668: INFO: Pod "pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27" satisfied condition "Succeeded or Failed" Dec 14 09:06:00.671: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27 container agnhost-container: STEP: delete the pod Dec 14 09:06:00.685: INFO: Waiting for pod pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27 to disappear Dec 14 09:06:00.689: INFO: Pod pod-projected-configmaps-265cadf6-cdd6-4242-ba36-7b65bd41fc27 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6172" for this suite. 
• [SLOW TEST:6.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":240,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:41.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Dec 14 09:05:41.838: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:01.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-26" for this suite. 
• [SLOW TEST:19.986 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":26,"skipped":426,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:00.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Dec 14 09:06:01.741: INFO: starting watch STEP: patching STEP: updating Dec 14 09:06:01.751: INFO: waiting for watch events with expected annotations Dec 14 09:06:01.751: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:01.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-7471" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":12,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:04:59.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:04:59.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Dec 14 09:05:01.733: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-12-14T09:05:01Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:01Z]] name:name1 resourceVersion:13948488 uid:5dcfc599-b163-4667-90f8-53dce421016b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Dec 14 09:05:11.740: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2021-12-14T09:05:11Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:11Z]] name:name2 resourceVersion:13948885 uid:656ee618-c222-4d49-a1f4-32e5dae78eba] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Dec 14 09:05:21.747: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-12-14T09:05:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:21Z]] name:name1 resourceVersion:13949269 uid:5dcfc599-b163-4667-90f8-53dce421016b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Dec 14 09:05:31.754: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-12-14T09:05:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:31Z]] name:name2 resourceVersion:13949652 uid:656ee618-c222-4d49-a1f4-32e5dae78eba] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Dec 14 09:05:41.760: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-12-14T09:05:01Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] 
f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:21Z]] name:name1 resourceVersion:13950073 uid:5dcfc599-b163-4667-90f8-53dce421016b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Dec 14 09:05:51.769: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-12-14T09:05:11Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-12-14T09:05:31Z]] name:name2 resourceVersion:13950454 uid:656ee618-c222-4d49-a1f4-32e5dae78eba] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:02.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5924" for this suite. 
• [SLOW TEST:63.170 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":11,"skipped":235,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:49.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:02.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6107" for this suite. • [SLOW TEST:13.130 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":13,"skipped":173,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:01.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:01.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-3568 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:08.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-3801" for this suite. 
[AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:08.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3568" for this suite. • [SLOW TEST:6.139 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":27,"skipped":475,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:02.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Dec 14 09:06:02.756: INFO: created test-pod-1 Dec 14 09:06:02.759: INFO: created test-pod-2 Dec 14 09:06:02.763: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for 
all pods to be deleted Dec 14 09:06:02.780: INFO: Pod quantity 3 is different from expected quantity 0 Dec 14 09:06:03.786: INFO: Pod quantity 3 is different from expected quantity 0 Dec 14 09:06:04.785: INFO: Pod quantity 3 is different from expected quantity 0 Dec 14 09:06:05.784: INFO: Pod quantity 3 is different from expected quantity 0 Dec 14 09:06:06.784: INFO: Pod quantity 1 is different from expected quantity 0 Dec 14 09:06:07.784: INFO: Pod quantity 1 is different from expected quantity 0 Dec 14 09:06:08.785: INFO: Pod quantity 1 is different from expected quantity 0 Dec 14 09:06:09.785: INFO: Pod quantity 1 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:10.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-517" for this suite. • [SLOW TEST:8.083 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":14,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:54.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Dec 14 09:05:54.994: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:10.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6612" for this suite. • [SLOW TEST:15.991 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:01.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Dec 14 09:06:01.926: INFO: Waiting up to 5m0s for pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb" in namespace "emptydir-3889" to be "Succeeded or Failed" Dec 14 09:06:01.929: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.998304ms Dec 14 09:06:03.933: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00736653s Dec 14 09:06:05.938: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012019187s Dec 14 09:06:07.944: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017825349s Dec 14 09:06:09.949: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022740413s Dec 14 09:06:11.953: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.027300639s STEP: Saw pod success Dec 14 09:06:11.954: INFO: Pod "pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb" satisfied condition "Succeeded or Failed" Dec 14 09:06:11.957: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb container test-container: STEP: delete the pod Dec 14 09:06:11.973: INFO: Waiting for pod pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb to disappear Dec 14 09:06:11.976: INFO: Pod pod-b3d9b0c4-74b7-4d83-a6b9-9d4939e737fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:11.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3889" for this suite. 
• [SLOW TEST:10.107 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":273,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:11.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:06:12.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74" in namespace "downward-api-5440" to be "Succeeded or Failed" Dec 14 09:06:12.042: INFO: Pod "downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88931ms Dec 14 09:06:14.048: INFO: Pod "downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008436232s STEP: Saw pod success Dec 14 09:06:14.048: INFO: Pod "downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74" satisfied condition "Succeeded or Failed" Dec 14 09:06:14.052: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74 container client-container: STEP: delete the pod Dec 14 09:06:14.069: INFO: Waiting for pod downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74 to disappear Dec 14 09:06:14.072: INFO: Pod downwardapi-volume-50429995-edcd-4e0f-bd8e-e21e13950a74 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:14.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5440" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":274,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:02.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 14 09:06:02.347: INFO: Waiting up to 5m0s for pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad" in namespace "emptydir-4861" to be "Succeeded or Failed" Dec 14 09:06:02.350: INFO: Pod 
"pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398247ms Dec 14 09:06:04.355: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007082264s Dec 14 09:06:06.358: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010903593s Dec 14 09:06:08.363: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015758138s Dec 14 09:06:10.367: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019940398s Dec 14 09:06:12.372: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024257384s Dec 14 09:06:14.377: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.029092536s STEP: Saw pod success Dec 14 09:06:14.377: INFO: Pod "pod-5b958831-6d97-482c-ba8d-e4eef26f77ad" satisfied condition "Succeeded or Failed" Dec 14 09:06:14.381: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-5b958831-6d97-482c-ba8d-e4eef26f77ad container test-container: STEP: delete the pod Dec 14 09:06:14.396: INFO: Waiting for pod pod-5b958831-6d97-482c-ba8d-e4eef26f77ad to disappear Dec 14 09:06:14.400: INFO: Pod pod-5b958831-6d97-482c-ba8d-e4eef26f77ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:14.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4861" for this suite. 
• [SLOW TEST:12.106 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:10.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:06:10.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30" in namespace "downward-api-183" to be "Succeeded or Failed" Dec 14 09:06:10.988: INFO: Pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.138191ms Dec 14 09:06:12.993: INFO: Pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008125013s Dec 14 09:06:14.999: INFO: Pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30": Phase="Running", Reason="", readiness=true. Elapsed: 4.014243477s Dec 14 09:06:17.003: INFO: Pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018109727s STEP: Saw pod success Dec 14 09:06:17.003: INFO: Pod "downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30" satisfied condition "Succeeded or Failed" Dec 14 09:06:17.006: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30 container client-container: STEP: delete the pod Dec 14 09:06:17.019: INFO: Waiting for pod downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30 to disappear Dec 14 09:06:17.023: INFO: Pod downwardapi-volume-9e9c9bf9-b668-48f4-9975-cc2bc7d05e30 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:17.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-183" for this suite. 
• [SLOW TEST:6.087 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":232,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:08.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Dec 14 09:06:08.131: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:10.136: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:12.136: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Dec 14 09:06:12.146: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:14.151: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 14 09:06:14.165: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 14 09:06:14.169: INFO: Pod pod-with-poststart-http-hook still exists Dec 14 09:06:16.169: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 14 09:06:16.173: INFO: Pod pod-with-poststart-http-hook still exists Dec 14 09:06:18.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 14 09:06:18.175: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:18.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1301" for this suite. 
• [SLOW TEST:10.104 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:14.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 14 09:06:14.166: INFO: Waiting up to 5m0s for pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37" in namespace "emptydir-2415" to be "Succeeded or Failed" Dec 14 09:06:14.170: INFO: Pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09822ms Dec 14 09:06:16.174: INFO: Pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007566921s Dec 14 09:06:18.179: INFO: Pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011978903s Dec 14 09:06:20.184: INFO: Pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017469084s STEP: Saw pod success Dec 14 09:06:20.184: INFO: Pod "pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37" satisfied condition "Succeeded or Failed" Dec 14 09:06:20.188: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37 container test-container: STEP: delete the pod Dec 14 09:06:20.205: INFO: Waiting for pod pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37 to disappear Dec 14 09:06:20.210: INFO: Pod pod-cc7003dd-7e78-4f54-9e60-9a6cd1a31b37 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:20.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2415" for this suite. 
• [SLOW TEST:6.097 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":289,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:18.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:06:18.369: INFO: The status of Pod busybox-readonly-fs10b066c0-25c6-4ca6-b6df-26ba740cea67 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:20.372: INFO: The status of Pod busybox-readonly-fs10b066c0-25c6-4ca6-b6df-26ba740cea67 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:20.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3364" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":543,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:20.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-c9488adc-c3c5-4f7e-8194-b3a1b0da0164 STEP: Creating a pod to test consume secrets Dec 14 09:06:20.351: INFO: Waiting up to 5m0s for pod "pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78" in namespace "secrets-995" to be "Succeeded or Failed" Dec 14 09:06:20.355: INFO: Pod "pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78": Phase="Pending", Reason="", readiness=false. Elapsed: 3.139489ms Dec 14 09:06:22.360: INFO: Pod "pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008018042s STEP: Saw pod success Dec 14 09:06:22.360: INFO: Pod "pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78" satisfied condition "Succeeded or Failed" Dec 14 09:06:22.363: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78 container secret-volume-test: STEP: delete the pod Dec 14 09:06:22.379: INFO: Waiting for pod pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78 to disappear Dec 14 09:06:22.387: INFO: Pod pod-secrets-aa629a43-3765-41c2-ad79-757c08cbab78 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:22.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-995" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":309,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:14.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:06:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 14 09:06:18.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-7195 --namespace=crd-publish-openapi-7195 create -f -' Dec 14 09:06:18.747: INFO: stderr: "" Dec 14 09:06:18.747: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Dec 14 09:06:18.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 --namespace=crd-publish-openapi-7195 delete e2e-test-crd-publish-openapi-6907-crds test-cr' Dec 14 09:06:18.863: INFO: stderr: "" Dec 14 09:06:18.863: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Dec 14 09:06:18.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 --namespace=crd-publish-openapi-7195 apply -f -' Dec 14 09:06:19.103: INFO: stderr: "" Dec 14 09:06:19.103: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Dec 14 09:06:19.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 --namespace=crd-publish-openapi-7195 delete e2e-test-crd-publish-openapi-6907-crds test-cr' Dec 14 09:06:19.218: INFO: stderr: "" Dec 14 09:06:19.218: INFO: stdout: "e2e-test-crd-publish-openapi-6907-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Dec 14 09:06:19.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7195 explain e2e-test-crd-publish-openapi-6907-crds' Dec 14 09:06:19.434: INFO: stderr: "" Dec 14 09:06:19.434: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6907-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned 
schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:23.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7195" for this suite. 
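[Editor's note] The CRD exercised above publishes a schema that preserves unknown fields in an embedded object, which is why `kubectl create`/`apply` accept a CR with arbitrary properties. A minimal CRD of that shape, as a sketch (group, names, and field layout here are illustrative, not the generated `e2e-test-crd-publish-openapi-6907` ones):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com          # illustrative name
spec:
  group: example.com                # illustrative group
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            # allow arbitrary unknown properties inside this embedded object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With such a schema the API server skips pruning inside `spec`/`status`, so client-side validation also lets unknown properties through, matching the STEP above.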
• [SLOW TEST:8.553 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":13,"skipped":290,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:23.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:06:23.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d" in namespace "projected-7594" to be "Succeeded or Failed" Dec 14 09:06:23.178: INFO: Pod "downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.403401ms Dec 14 09:06:25.183: INFO: Pod "downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008678613s STEP: Saw pod success Dec 14 09:06:25.184: INFO: Pod "downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d" satisfied condition "Succeeded or Failed" Dec 14 09:06:25.187: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d container client-container: STEP: delete the pod Dec 14 09:06:25.204: INFO: Waiting for pod downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d to disappear Dec 14 09:06:25.207: INFO: Pod downwardapi-volume-e63edb34-f084-412f-ada6-e32bf110fa8d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:25.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7594" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:25.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Dec 14 09:06:25.487: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Dec 14 
09:06:25.506: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:25.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-6312" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":15,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:25.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:25.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1216" for this suite. 
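[Editor's note] The Sysctls test above submits a pod mixing one valid sysctl with invalid ones and expects the API server to reject it at creation. A sketch of such a pod spec (sysctl names chosen for illustration, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo          # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced  # in the default safe set: valid
      value: "0"
    - name: foo-                    # malformed name: rejected by API validation
      value: "bar"
    - name: _invalid                # malformed name (illustrative): also rejected
      value: "1"
  containers:
  - name: test
    image: busybox                  # illustrative image
    command: ["/bin/sh", "-c", "sleep 3600"]
```

Because validation fails, no pod ever schedules, which is why the log shows only the create STEP before teardown.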
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":16,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:56.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod STEP: referencing matching pods with named port STEP: creating empty Endpoints and EndpointSlices for no matching Pods STEP: recreating EndpointSlices after they've been deleted Dec 14 09:06:16.386: INFO: EndpointSlice for Service endpointslice-3766/example-named-port not found [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:26.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-3766" for this suite. 
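[Editor's note] The EndpointSlice assertions above rely on the endpointslice controller mirroring the Pods selected by a Service, including resolving named target ports per pod. A Service of the kind involved (the name `example-named-port` appears in the log; the selector and port name are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-named-port          # name taken from the log above
spec:
  selector:
    app: example                    # illustrative label; pods carrying it get endpoints
  ports:
  - name: http
    port: 80
    targetPort: named-port          # resolved against the container port *name* on each pod
```

Deleting the generated EndpointSlices, as the test does, only triggers the controller to recreate them, which is what the "recreating EndpointSlices after they've been deleted" STEP verifies.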
• [SLOW TEST:30.149 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":20,"skipped":258,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:25.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:06:25.728: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1482290a-63d1-48ba-b169-e6422dbea0f8" in namespace "security-context-test-1614" to be "Succeeded or Failed" Dec 14 09:06:25.731: INFO: Pod "busybox-privileged-false-1482290a-63d1-48ba-b169-e6422dbea0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.264522ms Dec 14 09:06:27.736: INFO: Pod "busybox-privileged-false-1482290a-63d1-48ba-b169-e6422dbea0f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00739709s Dec 14 09:06:27.736: INFO: Pod "busybox-privileged-false-1482290a-63d1-48ba-b169-e6422dbea0f8" satisfied condition "Succeeded or Failed" Dec 14 09:06:27.741: INFO: Got logs for pod "busybox-privileged-false-1482290a-63d1-48ba-b169-e6422dbea0f8": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:27.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1614" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":439,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:20.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Dec 14 09:06:26.970: INFO: Successfully updated pod "adopt-release--1-hg74j" STEP: Checking that the Job readopts the Pod Dec 14 09:06:26.970: INFO: Waiting up to 15m0s for pod "adopt-release--1-hg74j" in namespace "job-6352" to be "adopted" Dec 14 09:06:26.974: INFO: Pod "adopt-release--1-hg74j": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.59509ms Dec 14 09:06:28.979: INFO: Pod "adopt-release--1-hg74j": Phase="Running", Reason="", readiness=true. Elapsed: 2.008661127s Dec 14 09:06:28.979: INFO: Pod "adopt-release--1-hg74j" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Dec 14 09:06:29.492: INFO: Successfully updated pod "adopt-release--1-hg74j" STEP: Checking that the Job releases the Pod Dec 14 09:06:29.493: INFO: Waiting up to 15m0s for pod "adopt-release--1-hg74j" in namespace "job-6352" to be "released" Dec 14 09:06:29.496: INFO: Pod "adopt-release--1-hg74j": Phase="Running", Reason="", readiness=true. Elapsed: 3.388123ms Dec 14 09:06:31.500: INFO: Pod "adopt-release--1-hg74j": Phase="Running", Reason="", readiness=true. Elapsed: 2.007134772s Dec 14 09:06:31.500: INFO: Pod "adopt-release--1-hg74j" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:31.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6352" for this suite. 
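[Editor's note] Adoption and release in the Job test above hinge on ownerReferences plus the Job's label selector: the test orphans a pod by stripping its ownerReference (the controller re-adopts it because its labels still match), then removes the matching labels so the controller releases it. A minimal Job of the kind involved, as a sketch (fields illustrative; `batch/v1` normally derives its selector from the generated `controller-uid` label):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release               # name taken from the log above
spec:
  parallelism: 2                    # illustrative; the test asserts active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox              # illustrative image
        command: ["/bin/sh", "-c", "sleep 3600"]
```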
• [SLOW TEST:11.096 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":30,"skipped":551,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:31.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 14 09:06:31.555: INFO: Waiting up to 5m0s for pod "pod-880d4813-bd33-4787-a358-8bb425017f26" in namespace "emptydir-6794" to be "Succeeded or Failed" Dec 14 09:06:31.558: INFO: Pod "pod-880d4813-bd33-4787-a358-8bb425017f26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.520045ms Dec 14 09:06:33.562: INFO: Pod "pod-880d4813-bd33-4787-a358-8bb425017f26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007294245s Dec 14 09:06:35.568: INFO: Pod "pod-880d4813-bd33-4787-a358-8bb425017f26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012785914s Dec 14 09:06:37.573: INFO: Pod "pod-880d4813-bd33-4787-a358-8bb425017f26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01735677s STEP: Saw pod success Dec 14 09:06:37.573: INFO: Pod "pod-880d4813-bd33-4787-a358-8bb425017f26" satisfied condition "Succeeded or Failed" Dec 14 09:06:37.576: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-880d4813-bd33-4787-a358-8bb425017f26 container test-container: STEP: delete the pod Dec 14 09:06:37.594: INFO: Waiting for pod pod-880d4813-bd33-4787-a358-8bb425017f26 to disappear Dec 14 09:06:37.597: INFO: Pod pod-880d4813-bd33-4787-a358-8bb425017f26 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:37.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6794" for this suite. • [SLOW TEST:6.082 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":556,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:37.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:37.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1788" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":32,"skipped":576,"failed":0} [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:37.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:37.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9002" for this suite. 
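[Editor's note] The QOS Class test above expects `status.qosClass: Guaranteed`, which the kubelet assigns when every container's resource requests equal its limits for both cpu and memory. A pod meeting that condition (names and quantities illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo         # illustrative name
spec:
  containers:
  - name: app
    image: busybox                  # illustrative image
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:                       # identical to requests => qosClass: Guaranteed
        cpu: 100m
        memory: 128Mi
```

Requests below limits would yield `Burstable`, and no requests or limits at all would yield `BestEffort`.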
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":33,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:11.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Dec 14 09:06:11.050: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:13.054: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:15.056: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:17.054: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.25.0.9 on the node which pod1 resides and expect scheduled Dec 14 09:06:17.063: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:19.068: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:21.082: INFO: The status of 
Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:23.072: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.25.0.9 but use UDP protocol on the node which pod2 resides Dec 14 09:06:23.081: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:25.087: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:27.086: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:29.087: INFO: The status of Pod pod3 is Running (Ready = false) Dec 14 09:06:31.087: INFO: The status of Pod pod3 is Running (Ready = true) Dec 14 09:06:31.095: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:33.100: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Dec 14 09:06:33.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.25.0.9 http://127.0.0.1:54323/hostname] Namespace:hostport-1418 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:06:33.104: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.25.0.9, port: 54323 Dec 14 09:06:33.270: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.25.0.9:54323/hostname] Namespace:hostport-1418 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:06:33.270: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.25.0.9, port: 54323 UDP Dec 14 09:06:33.423: INFO: ExecWithOptions 
{Command:[/bin/sh -c nc -vuz -w 5 172.25.0.9 54323] Namespace:hostport-1418 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:06:33.423: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:38.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-1418" for this suite. • [SLOW TEST:27.596 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":574,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:42.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
STEP: creating service in namespace services-7585 Dec 14 09:05:42.867: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:44.875: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:46.871: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:48.873: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:05:50.872: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Dec 14 09:05:50.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Dec 14 09:05:51.112: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Dec 14 09:05:51.112: INFO: stdout: "iptables" Dec 14 09:05:51.112: INFO: proxyMode: iptables Dec 14 09:05:51.120: INFO: Waiting for pod kube-proxy-mode-detector to disappear Dec 14 09:05:51.124: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7585 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7585 I1214 09:05:51.142353 16 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7585, replica count: 3 I1214 09:05:54.194419 16 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:05:57.195066 16 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:06:00.196602 16 runners.go:190] 
affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:06:00.203: INFO: Creating new exec pod Dec 14 09:06:15.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec execpod-affinityjd8xr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Dec 14 09:06:15.444: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Dec 14 09:06:15.444: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:06:15.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec execpod-affinityjd8xr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.156.17 80' Dec 14 09:06:15.669: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.156.17 80\nConnection to 10.143.156.17 80 port [tcp/http] succeeded!\n" Dec 14 09:06:15.669: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:06:15.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec execpod-affinityjd8xr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.143.156.17:80/ ; done' Dec 14 09:06:16.106: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n" Dec 14 09:06:16.106: INFO: stdout: "\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr\naffinity-clusterip-timeout-qpffr" Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: 
affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Received response from host: affinity-clusterip-timeout-qpffr Dec 14 09:06:16.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec execpod-affinityjd8xr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.143.156.17:80/' Dec 14 09:06:16.323: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n" Dec 14 09:06:16.323: INFO: stdout: "affinity-clusterip-timeout-qpffr" Dec 14 09:06:36.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7585 exec execpod-affinityjd8xr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.143.156.17:80/' Dec 14 09:06:36.572: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.143.156.17:80/\n" Dec 14 09:06:36.572: INFO: stdout: "affinity-clusterip-timeout-jprzg" Dec 14 09:06:36.572: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7585, will wait for the garbage collector to delete the pods Dec 14 09:06:36.637: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.030045ms Dec 14 09:06:36.738: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.060921ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 
09:06:39.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7585" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:56.735 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":508,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:27.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6897 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6897 STEP: creating replication controller externalsvc in namespace services-6897 I1214 
09:06:27.851683 19 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6897, replica count: 2 I1214 09:06:30.903372 19 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:06:33.905119 19 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Dec 14 09:06:33.927: INFO: Creating new exec pod Dec 14 09:06:37.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6897 exec execpodgwn6q -- /bin/sh -x -c nslookup clusterip-service.services-6897.svc.cluster.local' Dec 14 09:06:38.188: INFO: stderr: "+ nslookup clusterip-service.services-6897.svc.cluster.local\n" Dec 14 09:06:38.188: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-6897.svc.cluster.local\tcanonical name = externalsvc.services-6897.svc.cluster.local.\nName:\texternalsvc.services-6897.svc.cluster.local\nAddress: 10.143.119.153\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6897, will wait for the garbage collector to delete the pods Dec 14 09:06:38.249: INFO: Deleting ReplicationController externalsvc took: 6.748844ms Dec 14 09:06:38.350: INFO: Terminating ReplicationController externalsvc pods took: 100.822361ms Dec 14 09:06:42.567: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:42.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6897" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:14.798 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":18,"skipped":453,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:39.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-5ed928ef-520e-477b-a5e3-830dfa48d852 STEP: Creating a pod to test consume configMaps Dec 14 09:06:39.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458" in namespace "configmap-478" to be "Succeeded or Failed" Dec 14 09:06:39.613: INFO: Pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.750327ms Dec 14 09:06:41.618: INFO: Pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007872545s Dec 14 09:06:43.622: INFO: Pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011731045s Dec 14 09:06:45.628: INFO: Pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017286288s STEP: Saw pod success Dec 14 09:06:45.628: INFO: Pod "pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458" satisfied condition "Succeeded or Failed" Dec 14 09:06:45.632: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458 container agnhost-container: STEP: delete the pod Dec 14 09:06:45.646: INFO: Waiting for pod pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458 to disappear Dec 14 09:06:45.652: INFO: Pod pod-configmaps-11bc0f46-67d6-4472-aee3-df0ee78e0458 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:45.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-478" for this suite. 
• [SLOW TEST:6.094 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:26.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-1387 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-1387 Dec 14 09:06:26.456: INFO: Found 0 stateful pods, waiting for 1 Dec 14 09:06:36.460: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the 
statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:06:36.484: INFO: Deleting all statefulset in ns statefulset-1387 Dec 14 09:06:36.487: INFO: Scaling statefulset ss to 0 Dec 14 09:06:46.501: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:06:46.505: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:46.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1387" for this suite. • [SLOW TEST:20.116 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":21,"skipped":259,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:42.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-8b1f391b-daac-49ea-8983-c797790bc624 STEP: Creating a pod to test consume configMaps Dec 14 09:06:42.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27" in namespace "configmap-653" to be "Succeeded or Failed" Dec 14 09:06:42.640: INFO: Pod "pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.856961ms Dec 14 09:06:44.645: INFO: Pod "pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174066s Dec 14 09:06:46.650: INFO: Pod "pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012040677s STEP: Saw pod success Dec 14 09:06:46.650: INFO: Pod "pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27" satisfied condition "Succeeded or Failed" Dec 14 09:06:46.653: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27 container agnhost-container: STEP: delete the pod Dec 14 09:06:46.669: INFO: Waiting for pod pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27 to disappear Dec 14 09:06:46.672: INFO: Pod pod-configmaps-f7b21015-2e8b-4db1-b1b4-83afedd4af27 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:46.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-653" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":456,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:45.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:06:45.826: INFO: The status of Pod pod-secrets-58dc0551-d331-4a87-8771-77459d822873 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:47.830: INFO: The status of Pod pod-secrets-58dc0551-d331-4a87-8771-77459d822873 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:47.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6024" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":24,"skipped":555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:37.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-1919 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1919 STEP: Deleting pre-stop pod Dec 14 09:06:57.013: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:57.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1919" for this suite. • [SLOW TEST:19.109 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":34,"skipped":625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:57.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Dec 14 09:06:57.214: INFO: Waiting up to 5m0s for pod "pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5" in namespace "emptydir-4654" to be "Succeeded or Failed" Dec 14 09:06:57.222: INFO: Pod "pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498186ms Dec 14 09:06:59.226: INFO: Pod "pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012210763s STEP: Saw pod success Dec 14 09:06:59.226: INFO: Pod "pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5" satisfied condition "Succeeded or Failed" Dec 14 09:06:59.229: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5 container test-container: STEP: delete the pod Dec 14 09:06:59.246: INFO: Waiting for pod pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5 to disappear Dec 14 09:06:59.249: INFO: Pod pod-8f04b401-c5b9-4ce4-be7c-8e9b475d99b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:06:59.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4654" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":672,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:50.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-8332 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8332 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8332 Dec 14 09:05:50.234: INFO: Found 0 stateful pods, waiting for 1 Dec 14 09:06:00.240: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 14 09:06:00.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 14 09:06:00.513: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 14 09:06:00.513: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 14 09:06:00.513: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 14 09:06:00.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 14 09:06:10.522: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 14 09:06:10.522: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:06:10.539: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999515s Dec 14 09:06:11.544: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995710717s Dec 14 09:06:12.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991200955s Dec 14 09:06:13.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986604083s Dec 14 09:06:14.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981393251s Dec 14 09:06:15.562: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 4.976900922s Dec 14 09:06:16.566: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972770981s Dec 14 09:06:17.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968756034s Dec 14 09:06:18.575: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964005967s Dec 14 09:06:19.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.958102ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8332 Dec 14 09:06:20.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 14 09:06:20.870: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 14 09:06:20.870: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 14 09:06:20.870: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 14 09:06:20.874: INFO: Found 1 stateful pods, waiting for 3 Dec 14 09:06:30.882: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:06:30.882: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:06:30.882: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 14 09:06:30.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 14 09:06:31.152: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 14 09:06:31.152: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" Dec 14 09:06:31.152: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 14 09:06:31.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 14 09:06:31.423: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 14 09:06:31.423: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 14 09:06:31.423: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 14 09:06:31.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 14 09:06:31.636: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 14 09:06:31.636: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 14 09:06:31.637: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 14 09:06:31.637: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:06:31.639: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Dec 14 09:06:41.648: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 14 09:06:41.648: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 14 09:06:41.648: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 14 09:06:41.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999697s Dec 14 09:06:42.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 
8.99607524s Dec 14 09:06:43.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98960107s Dec 14 09:06:44.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985304632s Dec 14 09:06:45.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980823609s Dec 14 09:06:46.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975839521s Dec 14 09:06:47.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972773046s Dec 14 09:06:48.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.967688943s Dec 14 09:06:49.698: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962791352s Dec 14 09:06:50.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.459637ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8332 Dec 14 09:06:51.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 14 09:06:51.997: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 14 09:06:51.997: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 14 09:06:51.997: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 14 09:06:51.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 14 09:06:52.301: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 14 09:06:52.301: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 14 09:06:52.301: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' Dec 14 09:06:52.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-8332 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Dec 14 09:06:52.590: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Dec 14 09:06:52.590: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Dec 14 09:06:52.590: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Dec 14 09:06:52.590: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:07:02.607: INFO: Deleting all statefulset in ns statefulset-8332 Dec 14 09:07:02.611: INFO: Scaling statefulset ss to 0 Dec 14 09:07:02.623: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:07:02.626: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:02.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8332" for this suite. 
• [SLOW TEST:72.458 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":16,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:17.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8091 STEP: creating service affinity-nodeport-transition in namespace services-8091 STEP: creating replication controller affinity-nodeport-transition in namespace services-8091 I1214 09:06:17.114026 43 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8091, replica count: 3 I1214 
09:06:20.165418 43 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1214 09:06:23.166551 43 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:06:23.179: INFO: Creating new exec pod Dec 14 09:06:30.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' Dec 14 09:06:30.469: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Dec 14 09:06:30.469: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:06:30.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.12.177 80' Dec 14 09:06:30.696: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.12.177 80\nConnection to 10.133.12.177 80 port [tcp/http] succeeded!\n" Dec 14 09:06:30.697: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:06:30.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.10 30797' Dec 14 09:06:30.950: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.10 30797\nConnection to 172.25.0.10 30797 port [tcp/*] succeeded!\n" Dec 14 09:06:30.950: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 
09:06:30.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.9 30797' Dec 14 09:06:31.179: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.9 30797\nConnection to 172.25.0.9 30797 port [tcp/*] succeeded!\n" Dec 14 09:06:31.179: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:06:31.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.25.0.10:30797/ ; done' Dec 14 09:06:31.615: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n" Dec 14 09:06:31.616: INFO: stdout: 
"\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj" Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:06:31.616: INFO: Received response from host: 
affinity-nodeport-transition-hlrlj Dec 14 09:07:01.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.25.0.10:30797/ ; done' Dec 14 09:07:02.014: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n" Dec 14 09:07:02.014: INFO: stdout: 
"\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-kfmzc\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-kfmzc\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-kfmzc\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-kfmzc\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-c75ws\naffinity-nodeport-transition-kfmzc" Dec 14 09:07:02.014: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.014: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.014: INFO: Received response from host: affinity-nodeport-transition-kfmzc Dec 14 09:07:02.014: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.014: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-kfmzc Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-kfmzc Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-kfmzc Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.015: INFO: Received response from host: affinity-nodeport-transition-c75ws Dec 14 09:07:02.015: INFO: Received response from host: 
affinity-nodeport-transition-kfmzc Dec 14 09:07:02.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8091 exec execpod-affinityxs8w6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.25.0.10:30797/ ; done' Dec 14 09:07:02.441: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:30797/\n" Dec 14 09:07:02.441: INFO: stdout: 
"\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj\naffinity-nodeport-transition-hlrlj" Dec 14 09:07:02.441: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.441: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.441: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.441: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Received response from host: 
affinity-nodeport-transition-hlrlj Dec 14 09:07:02.442: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8091, will wait for the garbage collector to delete the pods Dec 14 09:07:02.512: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.865139ms Dec 14 09:07:02.613: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.912472ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:04.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8091" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:47.573 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":245,"failed":0} S ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:02.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:02.767: INFO: Creating pod... Dec 14 09:07:02.778: INFO: Pod Quantity: 1 Status: Pending Dec 14 09:07:03.783: INFO: Pod Quantity: 1 Status: Pending Dec 14 09:07:04.784: INFO: Pod Status: Running Dec 14 09:07:04.784: INFO: Creating service... Dec 14 09:07:04.795: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/DELETE Dec 14 09:07:04.801: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Dec 14 09:07:04.802: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/GET Dec 14 09:07:04.805: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Dec 14 09:07:04.805: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/HEAD Dec 14 09:07:04.809: INFO: http.Client request:HEAD | StatusCode:200 Dec 14 09:07:04.809: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/OPTIONS Dec 14 09:07:04.812: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Dec 14 09:07:04.812: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/PATCH Dec 14 09:07:04.816: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Dec 14 09:07:04.816: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/POST Dec 14 09:07:04.819: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Dec 14 09:07:04.820: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/pods/agnhost/proxy/some/path/with/PUT Dec 14 09:07:04.823: INFO: http.Client request:PUT 
| StatusCode:200 | Response:foo | Method:PUT Dec 14 09:07:04.823: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/DELETE Dec 14 09:07:04.827: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Dec 14 09:07:04.827: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/GET Dec 14 09:07:04.832: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Dec 14 09:07:04.832: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/HEAD Dec 14 09:07:04.837: INFO: http.Client request:HEAD | StatusCode:200 Dec 14 09:07:04.837: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/OPTIONS Dec 14 09:07:04.842: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Dec 14 09:07:04.842: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/PATCH Dec 14 09:07:04.847: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Dec 14 09:07:04.847: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/POST Dec 14 09:07:04.852: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Dec 14 09:07:04.852: INFO: Starting http.Client for https://172.25.0.6:6443/api/v1/namespaces/proxy-3591/services/test-service/proxy/some/path/with/PUT Dec 14 09:07:04.857: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:04.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "proxy-3591" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":17,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:38.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8125 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 14 09:06:38.651: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Dec 14 09:06:38.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:40.681: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:42.680: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:06:44.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:46.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:48.681: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:50.681: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:52.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:54.680: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Dec 14 09:06:56.681: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:06:58.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:00.682: INFO: The status of Pod netserver-0 is Running (Ready = true) Dec 14 09:07:00.689: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Dec 14 09:07:02.719: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Dec 14 09:07:02.719: INFO: Going to poll 192.168.1.39 on port 8081 at least 0 times, with a maximum of 34 tries before failing Dec 14 09:07:02.722: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.39 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8125 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:07:02.722: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:07:03.831: INFO: Found all 1 expected endpoints: [netserver-0] Dec 14 09:07:03.831: INFO: Going to poll 192.168.2.230 on port 8081 at least 0 times, with a maximum of 34 tries before failing Dec 14 09:07:03.835: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.230 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8125 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:07:03.835: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:07:04.950: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:04.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8125" for this suite. 
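The UDP poll above (up to 34 tries per endpoint) boils down to a retry-until-non-empty-output loop. A minimal POSIX-sh sketch of that pattern, assuming nothing beyond what the log shows — the pod IP, port, and the `kubectl exec` wrapper are hypothetical placeholders, not the framework's actual implementation:

```shell
#!/bin/sh
# retry N CMD: run CMD until it prints non-empty output, up to N times.
# Mirrors the e2e framework's per-endpoint polling (MaxTries=34 above);
# the blank-line filter matches the test's `grep -v '^\s*$'`.
retry() {
    n=$1; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        # `|| true` keeps a no-match grep from aborting under `set -e`.
        out=$(eval "$*" 2>/dev/null | grep -v '^[[:space:]]*$' || true)
        if [ -n "$out" ]; then
            printf '%s\n' "$out"
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

# On a real cluster the probed command would be the one from the log,
# run inside host-test-container-pod (IP/port hypothetical here):
#   retry 34 "echo hostName | nc -w 1 -u 192.168.1.39 8081"
```

The loop returns success on the first non-blank reply, which is why the log reports "Found all 1 expected endpoints" after a single try when the netserver pod is already Ready.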
• [SLOW TEST:26.341 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":580,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-9481 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9481 STEP: Waiting until pod test-pod will start running in namespace statefulset-9481 STEP: Creating statefulset with conflicting port in namespace 
statefulset-9481 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9481 Dec 14 09:06:52.784: INFO: Observed stateful pod in namespace: statefulset-9481, name: ss-0, uid: 4979146a-e141-4888-98f3-48bc728226c8, status phase: Pending. Waiting for statefulset controller to delete. Dec 14 09:06:53.474: INFO: Observed stateful pod in namespace: statefulset-9481, name: ss-0, uid: 4979146a-e141-4888-98f3-48bc728226c8, status phase: Failed. Waiting for statefulset controller to delete. Dec 14 09:06:53.484: INFO: Observed stateful pod in namespace: statefulset-9481, name: ss-0, uid: 4979146a-e141-4888-98f3-48bc728226c8, status phase: Failed. Waiting for statefulset controller to delete. Dec 14 09:06:53.487: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9481 STEP: Removing pod with conflicting port in namespace statefulset-9481 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9481 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:06:57.508: INFO: Deleting all statefulset in ns statefulset-9481 Dec 14 09:06:57.512: INFO: Scaling statefulset ss to 0 Dec 14 09:07:07.531: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:07:07.534: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:07.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9481" for this suite. 
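The Pending → Failed → deleted → Running sequence the test observes for ss-0 is, at bottom, a phase-polling loop. A small sketch of that loop under stated assumptions — the phase source is abstracted so the real `kubectl get` call (shown in a comment, names taken from the log but otherwise hypothetical) can be swapped in:

```shell
#!/bin/sh
# wait_for_phase WANT TRIES CMD...: poll CMD (which prints a pod phase)
# until it reports WANT, logging each observation the way the test
# logs "Observed stateful pod ... status phase: ...".
wait_for_phase() {
    want=$1; tries=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        # An empty result stands in for "pod deleted" between recreations.
        phase=$("$@" 2>/dev/null || true)
        echo "observed phase: ${phase:-<deleted>}"
        if [ "$phase" = "$want" ]; then
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

# On a real cluster CMD would be something like:
#   wait_for_phase Running 30 kubectl -n statefulset-9481 \
#       get pod ss-0 -o 'jsonpath={.status.phase}'
```

Against a live cluster this would surface the same intermediate Failed observations the log records while the statefulset controller deletes and recreates ss-0 once the conflicting port is freed.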
• [SLOW TEST:20.847 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":20,"skipped":467,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:47.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:06:48.379: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:06:50.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, 
loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:06:52.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069608, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:06:55.410: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook 
latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:07.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3755" for this suite. STEP: Destroying namespace "webhook-3755-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.647 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":25,"skipped":582,"failed":0} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:07.578: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:07.610: INFO: Creating simple deployment test-new-deployment Dec 14 09:07:07.622: INFO: deployment "test-new-deployment" doesn't have the required revision set STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:07:09.660: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-6079 a2e2e299-f492-4e88-8392-ea973d61c72d 13952764 3 2021-12-14 09:07:07 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2021-12-14 09:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } 
{kube-controller-manager Update apps/v1 2021-12-14 09:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00991e8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-12-14 09:07:09 +0000 UTC,LastTransitionTime:2021-12-14 09:07:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-12-14 09:07:09 +0000 UTC,LastTransitionTime:2021-12-14 09:07:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 14 09:07:09.664: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-6079 4805caab-f7f0-4e27-8691-8e2f5b11fd4d 13952769 2 2021-12-14 09:07:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment a2e2e299-f492-4e88-8392-ea973d61c72d 0xc006796327 0xc006796328}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:07:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2e2e299-f492-4e88-8392-ea973d61c72d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:07:09 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0067963b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:07:09.667: INFO: Pod "test-new-deployment-847dcfb7fb-qbwkq" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-qbwkq test-new-deployment-847dcfb7fb- deployment-6079 b63050d2-631b-49f9-a930-96bdb69b9920 13952768 0 2021-12-14 09:07:09 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4805caab-f7f0-4e27-8691-8e2f5b11fd4d 0xc006796777 0xc006796778}] [] [{kube-controller-manager Update v1 2021-12-14 09:07:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805caab-f7f0-4e27-8691-8e2f5b11fd4d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vk7r9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vk7r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:07:09.667: INFO: Pod "test-new-deployment-847dcfb7fb-sn689" is 
available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-sn689 test-new-deployment-847dcfb7fb- deployment-6079 71277f06-f5bd-4729-8c88-c20d4097fb6f 13952756 0 2021-12-14 09:07:07 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4805caab-f7f0-4e27-8691-8e2f5b11fd4d 0xc0067968d7 0xc0067968d8}] [] [{kube-controller-manager Update v1 2021-12-14 09:07:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4805caab-f7f0-4e27-8691-8e2f5b11fd4d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:07:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ffz2v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ffz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:09 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.239,StartTime:2021-12-14 09:07:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:07:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://3b61494985836f5ea21d831e9f5c5b6cf566f8d84945559f8c80fcbb48c71976,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:09.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6079" for this suite. 
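The scale-subresource steps logged above (get the scale, update spec.replicas, verify, then patch) can be sketched as client-side request bodies. This is a minimal illustration, not code from the e2e framework; the helper names are invented, and only the payload shapes follow the apps/v1 and autoscaling/v1 APIs:

```python
import json

def scale_update_body(replicas):
    # Body for a PUT on the deployment's /scale subresource.
    # The subresource is served as an autoscaling/v1 Scale object.
    return {
        "kind": "Scale",
        "apiVersion": "autoscaling/v1",
        "spec": {"replicas": replicas},
    }

def scale_merge_patch(replicas):
    # Equivalent merge-patch body targeting only spec.replicas,
    # as sent with Content-Type: application/merge-patch+json.
    return json.dumps({"spec": {"replicas": replicas}})
```

The test's managed-fields entry (`{e2e.test Update apps/v1 ... {"f:spec":{"f:replicas":{}}} scale}`) reflects exactly this kind of write: only `spec.replicas` is owned via the `scale` subresource.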
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":26,"skipped":582,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:09.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Dec 14 09:07:09.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7057 api-versions' Dec 14 09:07:09.833: INFO: stderr: "" Dec 14 09:07:09.833: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nlitmuschaos.io/v1alpha1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:09.833: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "kubectl-7057" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":27,"skipped":590,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:04.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:11.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1987" for this suite. • [SLOW TEST:7.054 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":17,"skipped":246,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:07.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create a Replicaset STEP: Verify that the required pods have come up. Dec 14 09:07:07.680: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 14 09:07:12.684: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Getting /status Dec 14 09:07:12.690: INFO: Replicaset test-rs has Conditions: [] STEP: updating the Replicaset Status Dec 14 09:07:12.696: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the ReplicaSet status to be updated Dec 14 09:07:12.698: INFO: Observed &ReplicaSet event: ADDED Dec 14 09:07:12.698: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.699: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.699: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.699: INFO: Found replicaset test-rs in namespace replicaset-2567 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Dec 14 09:07:12.699: INFO: Replicaset test-rs has an updated status STEP: patching the Replicaset Status Dec 14 09:07:12.699: INFO: Patch payload: 
{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Dec 14 09:07:12.704: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} STEP: watching for the Replicaset status to be patched Dec 14 09:07:12.706: INFO: Observed &ReplicaSet event: ADDED Dec 14 09:07:12.706: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.707: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.707: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.707: INFO: Observed replicaset test-rs in namespace replicaset-2567 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Dec 14 09:07:12.707: INFO: Observed &ReplicaSet event: MODIFIED Dec 14 09:07:12.707: INFO: Found replicaset test-rs in namespace replicaset-2567 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } Dec 14 09:07:12.707: INFO: Replicaset test-rs has a patched status [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:12.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2567" for this suite. 
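The status-patch step above logs its payload verbatim. A minimal sketch that constructs the same body (the function name is illustrative; the JSON shape is taken directly from the logged "Patch payload"):

```python
import json

def rs_status_patch(cond_type="StatusPatched", status="True"):
    # Merge-patch body for PATCH .../replicasets/<name>/status,
    # matching the payload the e2e test logs before patching.
    return json.dumps(
        {"status": {"conditions": [{"type": cond_type, "status": status}]}}
    )
```

Because the patch body omits `lastTransitionTime`, the stored condition comes back with the zero time (`0001-01-01 00:00:00 +0000 UTC`), which is what the subsequent watch events show.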
• [SLOW TEST:5.071 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":21,"skipped":502,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:11.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:11.779: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"590a99ac-8690-4fd7-9439-63d595560e92", Controller:(*bool)(0xc0051f60c2), BlockOwnerDeletion:(*bool)(0xc0051f60c3)}} Dec 14 09:07:11.784: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2b9d7d65-178d-4ffd-af86-0fccd3f38ec5", Controller:(*bool)(0xc0059d04ba), BlockOwnerDeletion:(*bool)(0xc0059d04bb)}} Dec 14 09:07:11.788: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"52e61303-4b47-4dc9-b169-79efa0eea5be", Controller:(*bool)(0xc004ed3fe2), BlockOwnerDeletion:(*bool)(0xc004ed3fe3)}} [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:16.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9065" for this suite. • [SLOW TEST:5.084 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":18,"skipped":252,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:09.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:09.912: INFO: The status of Pod server-envvars-15fd7cdf-530e-47ca-8c00-412385ab61a0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:07:11.917: INFO: The status of Pod server-envvars-15fd7cdf-530e-47ca-8c00-412385ab61a0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:07:13.916: INFO: The status of Pod server-envvars-15fd7cdf-530e-47ca-8c00-412385ab61a0 is Running (Ready = true) Dec 14 09:07:13.938: INFO: 
Waiting up to 5m0s for pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab" in namespace "pods-8088" to be "Succeeded or Failed" Dec 14 09:07:13.941: INFO: Pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662022ms Dec 14 09:07:15.946: INFO: Pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007575562s Dec 14 09:07:17.950: INFO: Pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011846434s Dec 14 09:07:19.956: INFO: Pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017670721s STEP: Saw pod success Dec 14 09:07:19.956: INFO: Pod "client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab" satisfied condition "Succeeded or Failed" Dec 14 09:07:19.960: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab container env3cont: STEP: delete the pod Dec 14 09:07:19.978: INFO: Waiting for pod client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab to disappear Dec 14 09:07:19.981: INFO: Pod client-envvars-a61f3b3c-0de7-49e1-a374-0fefe736d9ab no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:19.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8088" for this suite. 
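The test above checks that the client pod sees environment variables for the server's service. Kubernetes derives those names by uppercasing the service name and replacing dashes with underscores; a hedged sketch of just the naming rule (the helper is not part of the e2e framework):

```python
def service_env_names(service_name):
    # The kubelet injects Docker-links-style env vars for each
    # active service visible to a pod: <NAME>_SERVICE_HOST and
    # <NAME>_SERVICE_PORT, with '-' mapped to '_' and uppercased.
    base = service_name.upper().replace("-", "_")
    return [f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT"]
```

This is why the service must exist before the client pod starts: the variables are resolved at container creation, not updated afterwards.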
• [SLOW TEST:10.117 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":602,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:16.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:16.856: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d6ffe0e1-2cbf-4820-ae16-128d61656a59" in namespace "security-context-test-9295" to be "Succeeded or Failed" Dec 14 09:07:16.859: INFO: Pod "busybox-user-65534-d6ffe0e1-2cbf-4820-ae16-128d61656a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.990603ms Dec 14 09:07:18.864: INFO: Pod "busybox-user-65534-d6ffe0e1-2cbf-4820-ae16-128d61656a59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00825004s Dec 14 09:07:20.868: INFO: Pod "busybox-user-65534-d6ffe0e1-2cbf-4820-ae16-128d61656a59": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011870336s Dec 14 09:07:20.868: INFO: Pod "busybox-user-65534-d6ffe0e1-2cbf-4820-ae16-128d61656a59" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:20.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9295" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:12.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:12.758: INFO: Pod name sample-pod: Found 0 pods out of 1 Dec 14 09:07:17.763: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Dec 14 09:07:19.784: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Dec 14 09:07:19.799: INFO: observed ReplicaSet test-rs in namespace replicaset-4733 with ReadyReplicas 1, AvailableReplicas 1 Dec 14 09:07:19.809: INFO: observed ReplicaSet test-rs in namespace replicaset-4733 with ReadyReplicas 1, AvailableReplicas 1 Dec 14 09:07:19.818: INFO: observed ReplicaSet test-rs in namespace replicaset-4733 with ReadyReplicas 1, AvailableReplicas 1 Dec 14 
09:07:19.828: INFO: observed ReplicaSet test-rs in namespace replicaset-4733 with ReadyReplicas 1, AvailableReplicas 1 Dec 14 09:07:21.347: INFO: observed ReplicaSet test-rs in namespace replicaset-4733 with ReadyReplicas 2, AvailableReplicas 2 Dec 14 09:07:21.943: INFO: observed Replicaset test-rs in namespace replicaset-4733 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:21.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4733" for this suite. • [SLOW TEST:9.240 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":22,"skipped":506,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:21.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment Dec 14 09:07:22.011: INFO: Creating simple deployment test-deployment-6sxdm Dec 14 09:07:22.020: INFO: deployment 
"test-deployment-6sxdm" doesn't have the required revision set STEP: Getting /status Dec 14 09:07:24.041: INFO: Deployment test-deployment-6sxdm has Conditions: [{Available True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6sxdm-794dd694d8" has successfully progressed.}] STEP: updating Deployment Status Dec 14 09:07:24.051: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069643, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069643, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069643, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069642, loc:(*time.Location)(0xa09bc80)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-6sxdm-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Deployment status to be updated Dec 14 09:07:24.053: INFO: Observed &Deployment event: ADDED Dec 14 09:07:24.053: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetCreated Created new replica 
set "test-deployment-6sxdm-794dd694d8"} Dec 14 09:07:24.053: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.053: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6sxdm-794dd694d8"} Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Dec 14 09:07:24.054: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-6sxdm-794dd694d8" is progressing.} Dec 14 09:07:24.054: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6sxdm-794dd694d8" has successfully progressed.} Dec 14 09:07:24.054: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Dec 14 09:07:24.054: INFO: Observed Deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6sxdm-794dd694d8" has successfully progressed.} Dec 14 09:07:24.054: INFO: Found Deployment test-deployment-6sxdm in namespace deployment-4688 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Dec 14 09:07:24.054: INFO: Deployment test-deployment-6sxdm has an updated status STEP: patching the Deployment Status Dec 14 09:07:24.055: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Dec 14 09:07:24.061: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} STEP: watching for the Deployment status to be patched Dec 14 09:07:24.064: INFO: Observed &Deployment event: ADDED Dec 14 09:07:24.064: INFO: Observed deployment 
test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6sxdm-794dd694d8"} Dec 14 09:07:24.065: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.065: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6sxdm-794dd694d8"} Dec 14 09:07:24.065: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Dec 14 09:07:24.065: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.065: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Dec 14 09:07:24.065: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:22 +0000 UTC 2021-12-14 09:07:22 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-6sxdm-794dd694d8" is progressing.} Dec 14 09:07:24.065: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.065: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-12-14 09:07:23 +0000 UTC 
2021-12-14 09:07:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Dec 14 09:07:24.066: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6sxdm-794dd694d8" has successfully progressed.} Dec 14 09:07:24.066: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.066: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Dec 14 09:07:24.066: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2021-12-14 09:07:23 +0000 UTC 2021-12-14 09:07:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6sxdm-794dd694d8" has successfully progressed.} Dec 14 09:07:24.066: INFO: Observed deployment test-deployment-6sxdm in namespace deployment-4688 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Dec 14 09:07:24.066: INFO: Observed &Deployment event: MODIFIED Dec 14 09:07:24.066: INFO: Found deployment test-deployment-6sxdm in namespace deployment-4688 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Dec 14 09:07:24.066: INFO: Deployment test-deployment-6sxdm has a patched status [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:07:24.070: INFO: Deployment "test-deployment-6sxdm": &Deployment{ObjectMeta:{test-deployment-6sxdm deployment-4688 1bd6bd61-83e4-44d7-abd2-cb2d82f189a3 13953230 1 2021-12-14 09:07:22 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-12-14 09:07:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2021-12-14 09:07:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2021-12-14 09:07:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 
UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005434f38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-6sxdm-794dd694d8",LastUpdateTime:2021-12-14 09:07:24 +0000 UTC,LastTransitionTime:2021-12-14 09:07:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 14 09:07:24.074: INFO: New ReplicaSet "test-deployment-6sxdm-794dd694d8" of Deployment "test-deployment-6sxdm": &ReplicaSet{ObjectMeta:{test-deployment-6sxdm-794dd694d8 deployment-4688 a8b58cd6-26de-4875-b273-7b3ce8c8d31d 13953215 1 2021-12-14 09:07:22 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-6sxdm 1bd6bd61-83e4-44d7-abd2-cb2d82f189a3 0xc0053e1e87 0xc0053e1e88}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:07:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1bd6bd61-83e4-44d7-abd2-cb2d82f189a3\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:07:23 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0053e1f38 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:07:24.078: INFO: Pod "test-deployment-6sxdm-794dd694d8-8p8fn" is available: &Pod{ObjectMeta:{test-deployment-6sxdm-794dd694d8-8p8fn test-deployment-6sxdm-794dd694d8- deployment-4688 c7f05ab1-032e-48fe-8051-3941ea30d535 13953214 0 2021-12-14 09:07:22 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [{apps/v1 ReplicaSet test-deployment-6sxdm-794dd694d8 a8b58cd6-26de-4875-b273-7b3ce8c8d31d 0xc0047dab27 0xc0047dab28}] [] [{kube-controller-manager Update v1 2021-12-14 09:07:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a8b58cd6-26de-4875-b273-7b3ce8c8d31d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:07:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z2m9b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceL
ist{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2m9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQD
N:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:07:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.245,StartTime:2021-12-14 09:07:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:07:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://0711acf28c4ad8244e09f951f6a7d7e4c684780efbf5670d6c41433c7a73d549,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:24.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4688" for this suite. 
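The `/status` interactions logged above go through the Deployment's status subresource. The merge-patch payload is short enough to reproduce standalone; the commented kubectl invocation is only a sketch (kubectl gained `--subresource` in v1.24+, so the suite itself applies the patch through client-go rather than kubectl):

```shell
# Merge patch the test applies to the Deployment's status subresource
# (payload copied verbatim from the log above).
payload='{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}'
echo "$payload"

# Against a live cluster (kubectl v1.24+; names taken from this run):
# kubectl -n deployment-4688 patch deployment test-deployment-6sxdm \
#   --subresource=status --type=merge -p "$payload"
```

Note that a patch to `/status` never touches the spec, which is why the test then watches Deployment events until a MODIFIED event carries the `StatusPatched` condition.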
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":23,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:20.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9405 STEP: creating service affinity-clusterip-transition in namespace services-9405 STEP: creating replication controller affinity-clusterip-transition in namespace services-9405 I1214 09:07:20.068113 16 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9405, replica count: 3 I1214 09:07:23.119860 16 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:07:23.127: INFO: Creating new exec pod Dec 14 09:07:26.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9405 exec execpod-affinity9qtkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Dec 14 09:07:26.414: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" 
Dec 14 09:07:26.414: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:07:26.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9405 exec execpod-affinity9qtkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.174.69 80' Dec 14 09:07:26.673: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.174.69 80\nConnection to 10.130.174.69 80 port [tcp/http] succeeded!\n" Dec 14 09:07:26.673: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:07:26.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9405 exec execpod-affinity9qtkq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.130.174.69:80/ ; done' Dec 14 09:07:27.104: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.130.174.69:80/\n" Dec 14 09:07:27.104: INFO: stdout: "\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-2dvcl\naffinity-clusterip-transition-2dvcl\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-2dvcl\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-2dvcl\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-dd78g\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-2dvcl" Dec 14 09:07:27.104: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.104: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-2dvcl Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-2dvcl Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-2dvcl Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-2dvcl Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-dd78g Dec 14 09:07:27.105: INFO: 
Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.105: INFO: Received response from host: affinity-clusterip-transition-2dvcl Dec 14 09:07:27.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9405 exec execpod-affinity9qtkq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.130.174.69:80/ ; done' Dec 14 09:07:27.528: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.130.174.69:80/\n" Dec 14 09:07:27.529: INFO: stdout: 
"\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz\naffinity-clusterip-transition-74tcz" Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: 
INFO: Received response from host: affinity-clusterip-transition-74tcz Dec 14 09:07:27.529: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9405, will wait for the garbage collector to delete the pods Dec 14 09:07:27.597: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.284547ms Dec 14 09:07:27.697: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.205561ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:29.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9405" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:9.611 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":609,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:29.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6110" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":30,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:59.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo 
OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8663.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8663.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8663.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 14 09:07:01.336: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.340: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.344: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.348: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.362: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from 
pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.371: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.374: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:01.383: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:06.389: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.394: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.398: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from 
pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.402: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.414: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.419: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.423: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.427: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:06.435: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:11.389: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.393: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.397: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.401: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.411: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.415: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.419: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod 
dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.423: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:11.432: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:16.388: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.393: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.397: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.401: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod 
dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.413: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.417: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.421: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.425: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:16.432: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:21.387: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.391: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.394: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.398: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.409: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.412: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.416: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.419: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:21.426: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:26.389: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.394: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.398: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.402: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.415: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.418: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.423: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.427: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local from pod dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e: the server could not find the requested resource (get pods dns-test-9899c567-673a-4547-a0ab-16fe177c425e) Dec 14 09:07:26.434: INFO: Lookups using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8663.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8663.svc.cluster.local jessie_udp@dns-test-service-2.dns-8663.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8663.svc.cluster.local] Dec 14 09:07:31.436: INFO: DNS probes using dns-8663/dns-test-9899c567-673a-4547-a0ab-16fe177c425e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:31.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8663" for this suite. • [SLOW TEST:32.194 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":36,"skipped":676,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:04.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:33.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3083" for this suite. 
• [SLOW TEST:28.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":18,"skipped":259,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:31.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:35.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2783" for this suite. 
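The Kubelet test that just tore down `kubelet-test-2783` verifies that a pod whose command always fails reports a terminated reason. A minimal sketch of the shape of that check, assuming the usual Kubernetes container-status layout (the status dicts below are illustrative, not captured from this run):

```python
# Hedged sketch, not the e2e framework's code: a container that has
# terminated exposes state.terminated with a non-empty reason.

def terminated_reason(container_status):
    """Return the terminated reason, or None if the container is not terminated."""
    terminated = container_status.get("state", {}).get("terminated")
    if terminated is None:
        return None
    return terminated.get("reason")

# Illustrative status shapes mirroring the core/v1 ContainerStatus layout.
failed = {"state": {"terminated": {"exitCode": 1, "reason": "Error"}}}
running = {"state": {"running": {"startedAt": "2021-12-14T09:07:31Z"}}}

assert terminated_reason(failed) == "Error"
assert terminated_reason(running) is None
```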
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:29.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Dec 14 09:07:29.800: INFO: Pod name pod-release: Found 0 pods out of 1 Dec 14 09:07:34.804: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:35.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3431" for this suite. 
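The ReplicationController test above ("should release no longer matching pods") relies on equality-based selector semantics: a controller keeps a pod only while the pod's labels satisfy every key/value pair in the controller's selector, so mutating a matched label releases the pod. A hedged sketch of that matching rule (names illustrative):

```python
# Hedged sketch of equality-based label selector matching; not taken
# from the controller's source.

def selector_matches(selector, labels):
    """True if labels contain every key/value pair required by selector."""
    return all(labels.get(key) == value for key, value in selector.items())

selector = {"name": "pod-release"}

assert selector_matches(selector, {"name": "pod-release", "pod": "a"})
# After the test mutates the label, the pod no longer matches and is released.
assert not selector_matches(selector, {"name": "pod-release-x", "pod": "a"})
```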
• [SLOW TEST:6.073 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":31,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:35.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-1822/secret-test-09dfa37a-8bd0-4406-a1e3-330647becfc6 STEP: Creating a pod to test consume secrets Dec 14 09:07:35.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070" in namespace "secrets-1822" to be "Succeeded or Failed" Dec 14 09:07:35.955: INFO: Pod "pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070": Phase="Pending", Reason="", readiness=false. Elapsed: 3.468407ms Dec 14 09:07:37.960: INFO: Pod "pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008177368s STEP: Saw pod success Dec 14 09:07:37.960: INFO: Pod "pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070" satisfied condition "Succeeded or Failed" Dec 14 09:07:37.964: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070 container env-test: STEP: delete the pod Dec 14 09:07:37.981: INFO: Waiting for pod pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070 to disappear Dec 14 09:07:37.985: INFO: Pod pod-configmaps-e104e700-be0a-4d69-8137-924ee7cb6070 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:37.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1822" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":673,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:33.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:44.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1943" for this suite. • [SLOW TEST:11.134 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":19,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:46.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:46.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3483" for this suite. 
• [SLOW TEST:60.056 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:24.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-8300 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 14 09:07:24.177: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Dec 14 09:07:24.202: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:07:26.206: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:28.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:30.207: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:32.207: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:34.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:36.206: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:38.206: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:40.208: INFO: The status 
of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:42.207: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:44.208: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:07:46.208: INFO: The status of Pod netserver-0 is Running (Ready = true) Dec 14 09:07:46.216: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Dec 14 09:07:48.250: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Dec 14 09:07:48.250: INFO: Going to poll 192.168.1.54 on port 8083 at least 0 times, with a maximum of 34 tries before failing Dec 14 09:07:48.254: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.54:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8300 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:07:48.254: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:07:48.408: INFO: Found all 1 expected endpoints: [netserver-0] Dec 14 09:07:48.408: INFO: Going to poll 192.168.2.246 on port 8083 at least 0 times, with a maximum of 34 tries before failing Dec 14 09:07:48.412: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.246:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8300 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:07:48.412: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:07:48.558: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:48.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-8300" for this suite. • [SLOW TEST:24.427 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":535,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":261,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:46.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command Dec 14 09:07:46.641: INFO: Waiting up to 5m0s for pod "var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0" in namespace "var-expansion-4817" to be "Succeeded or Failed" Dec 14 09:07:46.645: INFO: Pod "var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.573911ms Dec 14 09:07:48.648: INFO: Pod "var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007111695s STEP: Saw pod success Dec 14 09:07:48.649: INFO: Pod "var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0" satisfied condition "Succeeded or Failed" Dec 14 09:07:48.652: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0 container dapi-container: STEP: delete the pod Dec 14 09:07:48.664: INFO: Waiting for pod var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0 to disappear Dec 14 09:07:48.666: INFO: Pod var-expansion-a1ad9f83-52f0-471d-b677-b8c8dbc8b8f0 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:48.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4817" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":261,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:48.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:07:48.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe" in namespace "downward-api-4819" to be "Succeeded or Failed" Dec 14 09:07:48.640: INFO: Pod "downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.800436ms Dec 14 09:07:50.645: INFO: Pod "downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009292366s STEP: Saw pod success Dec 14 09:07:50.645: INFO: Pod "downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe" satisfied condition "Succeeded or Failed" Dec 14 09:07:50.650: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe container client-container: STEP: delete the pod Dec 14 09:07:50.668: INFO: Waiting for pod downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe to disappear Dec 14 09:07:50.674: INFO: Pod downwardapi-volume-e43693a8-9fdb-47a0-b682-a31486368ebe no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:50.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4819" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":538,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:35.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:51.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1683" for this suite. • [SLOW TEST:16.144 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":-1,"completed":38,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:48.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:52.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7927" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":24,"skipped":282,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:50.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container Dec 14 09:07:53.291: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5244 pod-service-account-2af28669-f1a9-4eb1-ac09-6712fd3630c0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Dec 14 09:07:53.523: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5244 pod-service-account-2af28669-f1a9-4eb1-ac09-6712fd3630c0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Dec 14 09:07:53.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5244 pod-service-account-2af28669-f1a9-4eb1-ac09-6712fd3630c0 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:54.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5244" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":26,"skipped":548,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:04.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Dec 14 09:07:05.014: INFO: PodSpec: initContainers in spec.initContainers Dec 14 09:07:55.120: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dffc326c-1751-416c-8897-d1cbc055d20c", GenerateName:"", Namespace:"init-container-648", SelfLink:"", UID:"2ed524e9-9e81-4509-9793-3ab7dcac3fd9", ResourceVersion:"13953929", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63775069625, loc:(*time.Location)(0xa09bc80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"14947059"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ef1d40), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc004ef1d58), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ef1d70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ef1d88), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-w877c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004c08200), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-w877c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-w877c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-w877c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003cbf0d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"capi-v1.22-md-0-698f477975-vkd62", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00167dd50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cbf150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cbf170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003cbf178), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003cbf17c), 
PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0004de470), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069625, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069625, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069625, loc:(*time.Location)(0xa09bc80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069625, loc:(*time.Location)(0xa09bc80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.25.0.9", PodIP:"192.168.2.238", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.238"}}, StartTime:(*v1.Time)(0xc004ef1db8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00167de30)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00167dea0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://ffb6bb69d7b982d5d321d608e7476d349ed18b397c851d85e88b286cda9a9067", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004c082a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004c08280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc003cbf1ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-648" for this suite. 
• [SLOW TEST:50.147 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":31,"skipped":587,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:51.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:07:52.235: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:07:55.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook 
on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:55.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8493" for this suite. STEP: Destroying namespace "webhook-8493-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":39,"skipped":737,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:55.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a 
cronjob STEP: creating STEP: getting STEP: listing STEP: watching Dec 14 09:07:55.425: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Dec 14 09:07:55.430: INFO: starting watch STEP: patching STEP: updating Dec 14 09:07:55.446: INFO: waiting for watch events with expected annotations Dec 14 09:07:55.446: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:55.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2890" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":40,"skipped":745,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:55.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Dec 14 09:07:55.190: INFO: Waiting up to 5m0s for pod "var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3" in namespace "var-expansion-2658" to be "Succeeded or Failed" Dec 14 09:07:55.193: INFO: Pod "var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.497116ms Dec 14 09:07:57.198: INFO: Pod "var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0079094s STEP: Saw pod success Dec 14 09:07:57.198: INFO: Pod "var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3" satisfied condition "Succeeded or Failed" Dec 14 09:07:57.201: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3 container dapi-container: STEP: delete the pod Dec 14 09:07:57.216: INFO: Waiting for pod var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3 to disappear Dec 14 09:07:57.219: INFO: Pod var-expansion-0a940f25-8b71-4123-82b4-cc3e2f50f9d3 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:57.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2658" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":594,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:54.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:54.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 create -f -' Dec 14 09:07:54.392: INFO: stderr: "" Dec 14 09:07:54.392: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Dec 14 09:07:54.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 create -f -' Dec 14 09:07:54.627: INFO: stderr: "" Dec 14 09:07:54.627: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Dec 14 09:07:55.632: INFO: Selector matched 1 pods for map[app:agnhost] Dec 14 09:07:55.632: INFO: Found 0 / 1 Dec 14 09:07:56.633: INFO: Selector matched 1 pods for map[app:agnhost] Dec 14 09:07:56.633: INFO: Found 1 / 1 Dec 14 09:07:56.633: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Dec 14 09:07:56.636: INFO: Selector matched 1 pods for map[app:agnhost] Dec 14 09:07:56.637: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. Dec 14 09:07:56.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 describe pod agnhost-primary-xvvh4' Dec 14 09:07:56.765: INFO: stderr: "" Dec 14 09:07:56.765: INFO: stdout: "Name: agnhost-primary-xvvh4\nNamespace: kubectl-4105\nPriority: 0\nNode: capi-v1.22-md-0-698f477975-vkd62/172.25.0.9\nStart Time: Tue, 14 Dec 2021 09:07:54 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 192.168.2.5\nIPs:\n IP: 192.168.2.5\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://b657ed43a42c623eadf1621d084e36c8dd077e95262b229a88a12a6128e4e915\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 14 Dec 2021 09:07:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-25hjd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-25hjd:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-4105/agnhost-primary-xvvh4 to capi-v1.22-md-0-698f477975-vkd62\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal 
Started 1s kubelet Started container agnhost-primary\n" Dec 14 09:07:56.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 describe rc agnhost-primary' Dec 14 09:07:56.894: INFO: stderr: "" Dec 14 09:07:56.894: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4105\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-xvvh4\n" Dec 14 09:07:56.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 describe service agnhost-primary' Dec 14 09:07:57.011: INFO: stderr: "" Dec 14 09:07:57.011: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4105\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.128.181.254\nIPs: 10.128.181.254\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.2.5:6379\nSession Affinity: None\nEvents: \n" Dec 14 09:07:57.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 describe node capi-v1.22-control-plane-jzh89' Dec 14 09:07:57.172: INFO: stderr: "" Dec 14 09:07:57.172: INFO: stdout: "Name: capi-v1.22-control-plane-jzh89\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=capi-v1.22-control-plane-jzh89\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n 
node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: cluster.x-k8s.io/cluster-name: capi-v1.22\n cluster.x-k8s.io/cluster-namespace: default\n cluster.x-k8s.io/machine: capi-v1.22-control-plane-jzh89\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: capi-v1.22-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Mon, 30 Aug 2021 13:31:40 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: capi-v1.22-control-plane-jzh89\n AcquireTime: \n RenewTime: Tue, 14 Dec 2021 09:07:55 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 14 Dec 2021 09:07:21 +0000 Mon, 30 Aug 2021 13:31:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 14 Dec 2021 09:07:21 +0000 Mon, 30 Aug 2021 13:31:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 14 Dec 2021 09:07:21 +0000 Mon, 30 Aug 2021 13:31:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 14 Dec 2021 09:07:21 +0000 Mon, 30 Aug 2021 13:55:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.25.0.6\n Hostname: capi-v1.22-control-plane-jzh89\nCapacity:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nAllocatable:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nSystem Info:\n Machine ID: eff5f013f7b94f9a8450e311639e601f\n System UUID: d27caf1d-3186-4666-9e36-fc5a4a46c7da\n Boot ID: 23e6d5fc-ffea-44fa-b860-8360f8cb5e12\n Kernel Version: 
5.4.0-73-generic\n OS Image: Ubuntu Impish Indri (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.5\n Kubelet Version: v1.22.0\n Kube-Proxy Version: v1.22.0\nPodCIDR: 192.168.0.0/24\nPodCIDRs: 192.168.0.0/24\nProviderID: docker:////capi-v1.22-control-plane-jzh89\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default falco-6874k 100m (0%) 1 (1%) 512Mi (0%) 1Gi (1%) 94d\n kube-system create-loop-devs-dlp5v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system etcd-capi-v1.22-control-plane-jzh89 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 105d\n kube-system kindnet-r6nc4 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 105d\n kube-system kube-apiserver-capi-v1.22-control-plane-jzh89 250m (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-controller-manager-capi-v1.22-control-plane-jzh89 200m (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-proxy-srhx4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system kube-scheduler-capi-v1.22-control-plane-jzh89 100m (0%) 0 (0%) 0 (0%) 0 (0%) 105d\n kube-system tune-sysctls-76nv6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (0%) 1100m (1%)\n memory 662Mi (1%) 1074Mi (1%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Dec 14 09:07:57.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4105 describe namespace kubectl-4105' Dec 14 09:07:57.288: INFO: stderr: "" Dec 14 09:07:57.288: INFO: stdout: "Name: kubectl-4105\nLabels: e2e-framework=kubectl\n e2e-run=a0aaad78-c2f1-4e1a-a829-9a999f802979\n kubernetes.io/metadata.name=kubectl-4105\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] 
[sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:57.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4105" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":27,"skipped":550,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:57.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:07:57.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5707" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":28,"skipped":551,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:03:03.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:03.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5069" for this suite. 
• [SLOW TEST:300.065 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":3,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:52.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:07:52.918: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Dec 14 09:07:56.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 create -f -' Dec 14 09:07:57.075: INFO: stderr: "" Dec 14 09:07:57.075: INFO: stdout: "e2e-test-crd-publish-openapi-9183-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Dec 14 09:07:57.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 delete e2e-test-crd-publish-openapi-9183-crds test-foo' Dec 14 09:07:57.191: INFO: stderr: "" Dec 14 09:07:57.191: INFO: stdout: 
"e2e-test-crd-publish-openapi-9183-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Dec 14 09:07:57.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 apply -f -' Dec 14 09:07:57.405: INFO: stderr: "" Dec 14 09:07:57.405: INFO: stdout: "e2e-test-crd-publish-openapi-9183-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Dec 14 09:07:57.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 delete e2e-test-crd-publish-openapi-9183-crds test-foo' Dec 14 09:07:57.518: INFO: stderr: "" Dec 14 09:07:57.518: INFO: stdout: "e2e-test-crd-publish-openapi-9183-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Dec 14 09:07:57.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 create -f -' Dec 14 09:07:57.717: INFO: rc: 1 Dec 14 09:07:57.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 apply -f -' Dec 14 09:07:57.934: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Dec 14 09:07:57.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 create -f -' Dec 14 09:07:58.143: INFO: rc: 1 Dec 14 09:07:58.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 --namespace=crd-publish-openapi-3590 apply -f -' Dec 14 09:07:58.351: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Dec 14 09:07:58.351: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 explain e2e-test-crd-publish-openapi-9183-crds' Dec 14 09:07:58.573: INFO: stderr: "" Dec 14 09:07:58.573: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9183-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Dec 14 09:07:58.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 explain e2e-test-crd-publish-openapi-9183-crds.metadata' Dec 14 09:07:58.795: INFO: stderr: "" Dec 14 09:07:58.795: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9183-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Dec 14 09:07:58.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 explain e2e-test-crd-publish-openapi-9183-crds.spec' Dec 14 09:07:59.017: INFO: stderr: "" Dec 14 09:07:59.017: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9183-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Dec 14 09:07:59.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 explain e2e-test-crd-publish-openapi-9183-crds.spec.bars' Dec 14 09:07:59.246: INFO: stderr: "" Dec 14 09:07:59.246: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9183-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Dec 14 09:07:59.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3590 explain e2e-test-crd-publish-openapi-9183-crds.spec.bars2' Dec 14 09:07:59.467: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:03.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3590" for this suite. • [SLOW TEST:10.400 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":25,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:03.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating pod Dec 14 09:08:03.280: INFO: The status of Pod pod-hostip-b1dd3ca9-0ca0-4eca-aadb-e4ecc158d6f4 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:08:05.286: INFO: The status of Pod pod-hostip-b1dd3ca9-0ca0-4eca-aadb-e4ecc158d6f4 is Running (Ready = true) Dec 14 09:08:05.294: INFO: Pod pod-hostip-b1dd3ca9-0ca0-4eca-aadb-e4ecc158d6f4 has hostIP: 172.25.0.10 
[AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:05.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2676" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":171,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:57.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:07:58.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:08:00.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069678, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069678, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069678, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069678, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:08:03.172: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:08:03.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2556-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:06.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3693" for this suite. STEP: Destroying namespace "webhook-3693-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":29,"skipped":570,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:44.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-059c5bfd-47eb-4cf6-adb8-a844d89256a1 in namespace container-probe-6919 Dec 14 09:07:46.361: INFO: Started pod liveness-059c5bfd-47eb-4cf6-adb8-a844d89256a1 in namespace container-probe-6919 STEP: checking the pod's current state and verifying that restartCount is present Dec 14 09:07:46.365: INFO: Initial restart count of pod liveness-059c5bfd-47eb-4cf6-adb8-a844d89256a1 is 0 Dec 14 
09:08:06.412: INFO: Restart count of pod container-probe-6919/liveness-059c5bfd-47eb-4cf6-adb8-a844d89256a1 is now 1 (20.047002926s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:06.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6919" for this suite. • [SLOW TEST:22.123 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":312,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:06.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Dec 14 09:08:06.471: INFO: Waiting up to 5m0s for pod "pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7" in namespace "emptydir-8303" to be "Succeeded or Failed" Dec 14 09:08:06.474: INFO: Pod "pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.881916ms Dec 14 09:08:08.478: INFO: Pod "pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00704361s Dec 14 09:08:10.482: INFO: Pod "pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011321098s STEP: Saw pod success Dec 14 09:08:10.483: INFO: Pod "pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7" satisfied condition "Succeeded or Failed" Dec 14 09:08:10.486: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7 container test-container: STEP: delete the pod Dec 14 09:08:10.778: INFO: Waiting for pod pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7 to disappear Dec 14 09:08:10.782: INFO: Pod pod-bb6c51a6-88a9-4ee7-b4ff-6474df36d5f7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:10.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8303" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":316,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:57.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:13.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6655" for this suite. • [SLOW TEST:16.129 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":33,"skipped":604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:06.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-8240 STEP: creating service affinity-clusterip in namespace services-8240 STEP: creating replication controller affinity-clusterip in namespace services-8240 I1214 09:08:06.414433 19 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8240, replica count: 3 I1214 09:08:09.464896 19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:08:09.472: INFO: Creating new exec pod Dec 14 09:08:14.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8240 exec execpod-affinity8tpl6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Dec 14 09:08:14.802: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Dec 14 09:08:14.802: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:08:14.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8240 exec execpod-affinity8tpl6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.203.102 80' Dec 14 09:08:15.033: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.203.102 80\nConnection to 10.133.203.102 80 port [tcp/http] succeeded!\n" Dec 14 09:08:15.033: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:08:15.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8240 exec execpod-affinity8tpl6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.133.203.102:80/ ; done' Dec 14 09:08:15.404: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.203.102:80/\n" Dec 14 09:08:15.404: INFO: stdout: 
"\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x\naffinity-clusterip-m529x" Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Received response from host: affinity-clusterip-m529x Dec 14 09:08:15.404: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8240, will wait for the garbage collector to delete the pods Dec 14 09:08:15.475: INFO: Deleting ReplicationController affinity-clusterip took: 6.487775ms 
Dec 14 09:08:15.576: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.930516ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:20.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8240" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:14.426 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":576,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:13.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod 
STEP: Wait for the deployment to be ready Dec 14 09:08:14.393: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Dec 14 09:08:16.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069694, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069694, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069694, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069694, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:08:19.422: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:08:19.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:22.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-webhook-6372" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.930 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":34,"skipped":723,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:20.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:08:20.847: INFO: Creating deployment "test-recreate-deployment" Dec 14 09:08:20.852: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 14 09:08:20.860: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Dec 14 09:08:22.872: INFO: Waiting deployment 
"test-recreate-deployment" to complete Dec 14 09:08:22.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069700, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069700, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069700, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069700, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:08:24.881: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 14 09:08:24.892: INFO: Updating deployment test-recreate-deployment Dec 14 09:08:24.892: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:08:24.958: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5176 694d8837-03d3-432d-827f-b4ae845f78e2 13954711 2 2021-12-14 09:08:20 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00470d3f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-12-14 09:08:24 +0000 UTC,LastTransitionTime:2021-12-14 09:08:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-12-14 09:08:24 +0000 UTC,LastTransitionTime:2021-12-14 09:08:20 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Dec 14 09:08:24.962: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-5176 1d8056c7-80de-434e-889e-5e840529037d 13954708 1 2021-12-14 09:08:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 694d8837-03d3-432d-827f-b4ae845f78e2 0xc00470d8b0 0xc00470d8b1}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"694d8837-03d3-432d-827f-b4ae845f78e2\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00470d948 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:08:24.962: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 14 09:08:24.962: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-5176 107d2e76-0869-4e2e-b6a7-3bcc1c8cb27d 13954700 2 2021-12-14 09:08:20 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 694d8837-03d3-432d-827f-b4ae845f78e2 0xc00470d787 0xc00470d788}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"694d8837-03d3-432d-827f-b4ae845f78e2\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00470d838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:08:24.966: INFO: Pod "test-recreate-deployment-85d47dcb4-clmvd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-clmvd test-recreate-deployment-85d47dcb4- deployment-5176 e133b8b8-04d2-4393-808d-4a819fbdacb0 13954712 0 2021-12-14 09:08:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 1d8056c7-80de-434e-889e-5e840529037d 0xc0042e7f80 0xc0042e7f81}] [] [{kube-controller-manager Update v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d8056c7-80de-434e-889e-5e840529037d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:08:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4bjqr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bjqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:08:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-12-14 09:08:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:08:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:,StartTime:2021-12-14 09:08:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:24.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5176" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":31,"skipped":581,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:22.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 14 09:08:22.701: INFO: Waiting up to 5m0s for pod "pod-3f9324f2-97b0-410a-8f92-781297a91217" in namespace "emptydir-7098" to be "Succeeded or Failed" Dec 14 09:08:22.705: INFO: Pod "pod-3f9324f2-97b0-410a-8f92-781297a91217": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541624ms Dec 14 09:08:24.711: INFO: Pod "pod-3f9324f2-97b0-410a-8f92-781297a91217": Phase="Running", Reason="", readiness=true. Elapsed: 2.009496137s Dec 14 09:08:26.716: INFO: Pod "pod-3f9324f2-97b0-410a-8f92-781297a91217": Phase="Running", Reason="", readiness=true. Elapsed: 4.01449577s Dec 14 09:08:28.721: INFO: Pod "pod-3f9324f2-97b0-410a-8f92-781297a91217": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.019747186s STEP: Saw pod success Dec 14 09:08:28.721: INFO: Pod "pod-3f9324f2-97b0-410a-8f92-781297a91217" satisfied condition "Succeeded or Failed" Dec 14 09:08:28.725: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-3f9324f2-97b0-410a-8f92-781297a91217 container test-container: STEP: delete the pod Dec 14 09:08:28.745: INFO: Waiting for pod pod-3f9324f2-97b0-410a-8f92-781297a91217 to disappear Dec 14 09:08:28.752: INFO: Pod pod-3f9324f2-97b0-410a-8f92-781297a91217 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:28.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7098" for this suite. • [SLOW TEST:6.101 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":741,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:28.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Dec 14 09:08:28.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1265 0aa53356-df23-434a-8fe1-eb3f525177bc 13954794 0 2021-12-14 09:08:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-12-14 09:08:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:08:28.836: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1265 0aa53356-df23-434a-8fe1-eb3f525177bc 13954795 0 2021-12-14 09:08:28 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-12-14 09:08:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:28.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1265" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":36,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:28.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-3563/configmap-test-a1a00147-4caa-4179-810c-3795693a6d90 STEP: Creating a pod to test consume configMaps Dec 14 09:08:29.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b" in namespace "configmap-3563" to be "Succeeded or Failed" Dec 14 09:08:29.018: INFO: Pod "pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300822ms Dec 14 09:08:31.023: INFO: Pod "pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009638222s STEP: Saw pod success Dec 14 09:08:31.024: INFO: Pod "pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b" satisfied condition "Succeeded or Failed" Dec 14 09:08:31.027: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b container env-test: STEP: delete the pod Dec 14 09:08:31.048: INFO: Waiting for pod pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b to disappear Dec 14 09:08:31.053: INFO: Pod pod-configmaps-577f87d4-6d32-4b5c-a795-12990f21cd2b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:31.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3563" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":784,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:25.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-2a98afe9-e11b-42de-bcfc-090bbd995256 STEP: Creating configMap with name cm-test-opt-upd-2cdfcbec-54e5-4370-bd06-8c40997d25fc STEP: Creating the pod Dec 14 09:08:25.087: INFO: The status of Pod pod-projected-configmaps-75c6807c-75e2-4871-816f-7b5090f5844e is Pending, waiting for it 
to be Running (with Ready = true) Dec 14 09:08:27.091: INFO: The status of Pod pod-projected-configmaps-75c6807c-75e2-4871-816f-7b5090f5844e is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:08:29.093: INFO: The status of Pod pod-projected-configmaps-75c6807c-75e2-4871-816f-7b5090f5844e is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-2a98afe9-e11b-42de-bcfc-090bbd995256 STEP: Updating configmap cm-test-opt-upd-2cdfcbec-54e5-4370-bd06-8c40997d25fc STEP: Creating configMap with name cm-test-opt-create-483f55c9-16e1-413c-b6ee-5016c0265786 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:33.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5790" for this suite. • [SLOW TEST:8.166 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:33.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:08:33.941: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:08:36.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:08:36.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:08:40.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5368" for this suite. STEP: Destroying namespace "webhook-5368-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.795 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":33,"skipped":656,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:20.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-1460" for this suite. 
• [SLOW TEST:100.071 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":20,"skipped":291,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:01.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Dec 14 09:09:01.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3787 cluster-info' Dec 14 09:09:01.185: INFO: stderr: "" Dec 14 09:09:01.185: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.25.0.6:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:01.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3787" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":21,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:07:55.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:01.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-2621" for this suite. 
• [SLOW TEST:66.058 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":41,"skipped":763,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:01.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Dec 14 09:09:01.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1907 version'
Dec 14 09:09:01.765: INFO: stderr: ""
Dec 14 09:09:01.765: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.2\", GitCommit:\"8b5a19147530eaac9476b0ab82980b4088bbc1b2\", GitTreeState:\"clean\", BuildDate:\"2021-09-15T21:38:50Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.0\", GitCommit:\"c2b5237ccd9c0f1d600d3072634ca66cefdf272f\", GitTreeState:\"clean\", BuildDate:\"2021-08-04T20:01:24Z\", GoVersion:\"go1.16.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:01.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1907" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":42,"skipped":777,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:08:40.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-jvbl
STEP: Creating a pod to test atomic-volume-subpath
Dec 14 09:08:40.267: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jvbl" in namespace "subpath-3673" to be "Succeeded or Failed"
Dec 14 09:08:40.277: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267082ms
Dec 14 09:08:42.281: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013899231s
Dec 14 09:08:44.287: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 4.019575808s
Dec 14 09:08:46.292: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 6.024778348s
Dec 14 09:08:48.298: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 8.03023321s
Dec 14 09:08:50.304: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 10.03640255s
Dec 14 09:08:52.309: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 12.041842371s
Dec 14 09:08:54.316: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 14.048372195s
Dec 14 09:08:56.322: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 16.054393164s
Dec 14 09:08:58.327: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 18.059835364s
Dec 14 09:09:00.333: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 20.065749005s
Dec 14 09:09:02.338: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Running", Reason="", readiness=true. Elapsed: 22.070764381s
Dec 14 09:09:04.344: INFO: Pod "pod-subpath-test-projected-jvbl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.076595233s
STEP: Saw pod success
Dec 14 09:09:04.344: INFO: Pod "pod-subpath-test-projected-jvbl" satisfied condition "Succeeded or Failed"
Dec 14 09:09:04.348: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-subpath-test-projected-jvbl container test-container-subpath-projected-jvbl:
STEP: delete the pod
Dec 14 09:09:04.365: INFO: Waiting for pod pod-subpath-test-projected-jvbl to disappear
Dec 14 09:09:04.368: INFO: Pod pod-subpath-test-projected-jvbl no longer exists
STEP: Deleting pod pod-subpath-test-projected-jvbl
Dec 14 09:09:04.368: INFO: Deleting pod "pod-subpath-test-projected-jvbl" in namespace "subpath-3673"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3673" for this suite.
• [SLOW TEST:24.164 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":672,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:07:38.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name cm-test-opt-del-a9180a39-255d-42ab-998e-908ab682acca
STEP: Creating configMap with name cm-test-opt-upd-b83cb8df-ca47-480e-a618-38772cd5449d
STEP: Creating the pod
Dec 14 09:07:38.066: INFO: The status of Pod pod-configmaps-30a792ab-a95f-4379-a2e9-40ac13a662c3 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:07:40.071: INFO: The status of Pod pod-configmaps-30a792ab-a95f-4379-a2e9-40ac13a662c3 is Pending, waiting for it to be Running (with Ready = true)
Dec 14 09:07:42.071: INFO: The status of Pod pod-configmaps-30a792ab-a95f-4379-a2e9-40ac13a662c3 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-a9180a39-255d-42ab-998e-908ab682acca
STEP: Updating configmap cm-test-opt-upd-b83cb8df-ca47-480e-a618-38772cd5449d
STEP: Creating configMap with name cm-test-opt-create-d0bfc403-21d4-43ea-9b1a-f8ff403daedf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:04.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9356" for this suite.
• [SLOW TEST:86.808 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":678,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:01.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-1edb1755-2d48-4f4d-876d-eb486ad2fb22
STEP: Creating a pod to test consume configMaps
Dec 14 09:09:01.319: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2" in namespace "projected-3154" to be "Succeeded or Failed"
Dec 14 09:09:01.322: INFO: Pod "pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939913ms
Dec 14 09:09:03.327: INFO: Pod "pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008102872s
Dec 14 09:09:05.333: INFO: Pod "pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01411458s
STEP: Saw pod success
Dec 14 09:09:05.333: INFO: Pod "pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2" satisfied condition "Succeeded or Failed"
Dec 14 09:09:05.337: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2 container projected-configmap-volume-test:
STEP: delete the pod
Dec 14 09:09:05.352: INFO: Waiting for pod pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2 to disappear
Dec 14 09:09:05.355: INFO: Pod pod-projected-configmaps-575b1c8b-1c64-419c-b07e-1397c7bc5de2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:05.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3154" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:08:31.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Dec 14 09:08:33.138: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3274 PodName:var-expansion-9ebac612-5163-4938-af6c-da31d9ec6f4a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Dec 14 09:08:33.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: test for file in mounted path
Dec 14 09:08:33.422: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3274 PodName:var-expansion-9ebac612-5163-4938-af6c-da31d9ec6f4a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Dec 14 09:08:33.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: updating the annotation value
Dec 14 09:08:34.046: INFO: Successfully updated pod "var-expansion-9ebac612-5163-4938-af6c-da31d9ec6f4a"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Dec 14 09:08:34.049: INFO: Deleting pod "var-expansion-9ebac612-5163-4938-af6c-da31d9ec6f4a" in namespace "var-expansion-3274"
Dec 14 09:08:34.054: INFO: Wait up to 5m0s for pod "var-expansion-9ebac612-5163-4938-af6c-da31d9ec6f4a" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3274" for this suite.
• [SLOW TEST:36.996 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":38,"skipped":786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:04.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-647dcf61-9d5b-40b9-92a0-8326fa0dc830
STEP: Creating a pod to test consume secrets
Dec 14 09:09:04.454: INFO: Waiting up to 5m0s for pod "pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615" in namespace "secrets-1798" to be "Succeeded or Failed"
Dec 14 09:09:04.457: INFO: Pod "pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615": Phase="Pending", Reason="", readiness=false. Elapsed: 3.582979ms
Dec 14 09:09:06.462: INFO: Pod "pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008717818s
Dec 14 09:09:08.468: INFO: Pod "pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013938003s
STEP: Saw pod success
Dec 14 09:09:08.468: INFO: Pod "pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615" satisfied condition "Succeeded or Failed"
Dec 14 09:09:08.471: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615 container secret-volume-test:
STEP: delete the pod
Dec 14 09:09:08.487: INFO: Waiting for pod pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615 to disappear
Dec 14 09:09:08.491: INFO: Pod pod-secrets-dfd15526-5e78-438d-aad8-b2e176de2615 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:08.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1798" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":679,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:05.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 14 09:09:05.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 14 09:09:08.823: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:08.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8189" for this suite.
STEP: Destroying namespace "webhook-8189-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":23,"skipped":349,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:04.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should add annotations for pods in rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Dec 14 09:09:04.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8820 create -f -'
Dec 14 09:09:05.210: INFO: stderr: ""
Dec 14 09:09:05.210: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Dec 14 09:09:06.213: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:06.213: INFO: Found 0 / 1
Dec 14 09:09:07.215: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:07.215: INFO: Found 0 / 1
Dec 14 09:09:08.215: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:08.215: INFO: Found 0 / 1
Dec 14 09:09:09.214: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:09.215: INFO: Found 1 / 1
Dec 14 09:09:09.215: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 14 09:09:09.218: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:09.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 14 09:09:09.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8820 patch pod agnhost-primary-wdl5k -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 14 09:09:09.332: INFO: stderr: ""
Dec 14 09:09:09.332: INFO: stdout: "pod/agnhost-primary-wdl5k patched\n"
STEP: checking annotations
Dec 14 09:09:09.336: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 14 09:09:09.336: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:09.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8820" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":34,"skipped":680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:02.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:13.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2129" for this suite.
• [SLOW TEST:11.158 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":43,"skipped":885,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:08.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Dec 14 09:09:08.189: INFO: Waiting up to 5m0s for pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe" in namespace "security-context-4630" to be "Succeeded or Failed"
Dec 14 09:09:08.192: INFO: Pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108793ms
Dec 14 09:09:10.197: INFO: Pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505144s
Dec 14 09:09:12.201: INFO: Pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012388447s
Dec 14 09:09:14.206: INFO: Pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016560092s
STEP: Saw pod success
Dec 14 09:09:14.206: INFO: Pod "security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe" satisfied condition "Succeeded or Failed"
Dec 14 09:09:14.210: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe container test-container:
STEP: delete the pod
Dec 14 09:09:14.225: INFO: Waiting for pod security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe to disappear
Dec 14 09:09:14.228: INFO: Pod security-context-82bd492e-f3a1-4fc8-bfa7-f95a25728bbe no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:14.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4630" for this suite.
• [SLOW TEST:6.097 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:08.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-31bcc142-bec4-40fb-8016-5b111b53bddc
STEP: Creating a pod to test consume configMaps
Dec 14 09:09:08.586: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c" in namespace "projected-78" to be "Succeeded or Failed"
Dec 14 09:09:08.589: INFO: Pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.178887ms
Dec 14 09:09:10.594: INFO: Pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00763678s
Dec 14 09:09:12.599: INFO: Pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01275403s
Dec 14 09:09:14.604: INFO: Pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018135184s
STEP: Saw pod success
Dec 14 09:09:14.604: INFO: Pod "pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c" satisfied condition "Succeeded or Failed"
Dec 14 09:09:14.608: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c container agnhost-container:
STEP: delete the pod
Dec 14 09:09:14.623: INFO: Waiting for pod pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c to disappear
Dec 14 09:09:14.626: INFO: Pod pod-projected-configmaps-5fc0c5e1-a56e-4e57-878c-76128792f43c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:14.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-78" for this suite.
• [SLOW TEST:6.094 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":692,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:08.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Dec 14 09:09:08.997: INFO: Waiting up to 5m0s for pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2" in namespace "downward-api-9469" to be "Succeeded or Failed"
Dec 14 09:09:09.001: INFO: Pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.492316ms
Dec 14 09:09:11.005: INFO: Pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007793304s
Dec 14 09:09:13.010: INFO: Pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012859s
Dec 14 09:09:15.016: INFO: Pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018741043s
STEP: Saw pod success
Dec 14 09:09:15.016: INFO: Pod "downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2" satisfied condition "Succeeded or Failed"
Dec 14 09:09:15.020: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2 container dapi-container:
STEP: delete the pod
Dec 14 09:09:15.040: INFO: Waiting for pod downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2 to disappear
Dec 14 09:09:15.043: INFO: Pod downward-api-80c16a3e-4f13-48e3-a803-6a898104e3f2 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:15.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9469" for this suite.
• [SLOW TEST:6.097 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":363,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:09.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Dec 14 09:09:09.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579" in namespace "projected-7605" to be "Succeeded or Failed"
Dec 14 09:09:09.464: INFO: Pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579": Phase="Pending", Reason="", readiness=false. Elapsed: 2.891658ms
Dec 14 09:09:11.468: INFO: Pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007157058s
Dec 14 09:09:13.473: INFO: Pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01194028s
Dec 14 09:09:15.478: INFO: Pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017169901s
STEP: Saw pod success
Dec 14 09:09:15.478: INFO: Pod "downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579" satisfied condition "Succeeded or Failed"
Dec 14 09:09:15.482: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579 container client-container:
STEP: delete the pod
Dec 14 09:09:15.499: INFO: Waiting for pod downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579 to disappear
Dec 14 09:09:15.503: INFO: Pod downwardapi-volume-647c99c8-6c13-4a9d-af17-d855b013c579 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:15.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7605" for this suite.
• [SLOW TEST:6.087 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:13.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Dec 14 09:09:13.294: INFO: Waiting up to 5m0s for pod "client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5" in namespace "containers-2405" to be "Succeeded or Failed"
Dec 14 09:09:13.298: INFO: Pod "client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275854ms
Dec 14 09:09:15.303: INFO: Pod "client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008807338s
Dec 14 09:09:17.307: INFO: Pod "client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013069738s
STEP: Saw pod success
Dec 14 09:09:17.308: INFO: Pod "client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5" satisfied condition "Succeeded or Failed"
Dec 14 09:09:17.311: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5 container agnhost-container:
STEP: delete the pod
Dec 14 09:09:17.324: INFO: Waiting for pod client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5 to disappear
Dec 14 09:09:17.327: INFO: Pod client-containers-bf81327c-ec76-4c6e-bba2-a7593ad7ead5 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:17.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2405" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":895,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:17.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:09:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8292" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":45,"skipped":904,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:17.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Dec 14 09:09:17.483: INFO: Waiting up to 5m0s for pod "security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f" in namespace "security-context-7306" to be "Succeeded or Failed"
Dec 14 09:09:17.486: INFO: Pod "security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896677ms
Dec 14 09:09:19.489: INFO: Pod "security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006783799s
Dec 14 09:09:21.495: INFO: Pod "security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012120848s STEP: Saw pod success Dec 14 09:09:21.495: INFO: Pod "security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f" satisfied condition "Succeeded or Failed" Dec 14 09:09:21.498: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f container test-container: STEP: delete the pod Dec 14 09:09:21.514: INFO: Waiting for pod security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f to disappear Dec 14 09:09:21.518: INFO: Pod security-context-dce248ac-7823-4a07-82a5-06a5c6e9330f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:21.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7306" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":46,"skipped":914,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:15.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:09:15.691: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Dec 14 09:09:19.438: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 --namespace=crd-publish-openapi-2071 create -f -' Dec 14 09:09:19.805: INFO: stderr: "" Dec 14 09:09:19.805: INFO: stdout: "e2e-test-crd-publish-openapi-4373-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 14 09:09:19.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 --namespace=crd-publish-openapi-2071 delete e2e-test-crd-publish-openapi-4373-crds test-cr' Dec 14 09:09:19.913: INFO: stderr: "" Dec 14 09:09:19.913: INFO: stdout: "e2e-test-crd-publish-openapi-4373-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Dec 14 09:09:19.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 --namespace=crd-publish-openapi-2071 apply -f -' Dec 14 09:09:20.145: INFO: stderr: "" Dec 14 09:09:20.145: INFO: stdout: "e2e-test-crd-publish-openapi-4373-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Dec 14 09:09:20.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 --namespace=crd-publish-openapi-2071 delete e2e-test-crd-publish-openapi-4373-crds test-cr' Dec 14 09:09:20.255: INFO: stderr: "" Dec 14 09:09:20.255: INFO: stdout: "e2e-test-crd-publish-openapi-4373-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Dec 14 09:09:20.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2071 explain e2e-test-crd-publish-openapi-4373-crds' Dec 14 09:09:20.483: INFO: stderr: "" Dec 14 09:09:20.484: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4373-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:24.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2071" for this suite. • [SLOW TEST:8.564 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":36,"skipped":774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:15.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:09:15.105: INFO: Creating deployment "webserver-deployment" Dec 14 09:09:15.108: INFO: Waiting for observed generation 1 Dec 14 09:09:17.115: INFO: Waiting for all required pods to come up Dec 14 09:09:17.121: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Dec 14 
09:09:23.130: INFO: Waiting for deployment "webserver-deployment" to complete Dec 14 09:09:23.137: INFO: Updating deployment "webserver-deployment" with a non-existent image Dec 14 09:09:23.146: INFO: Updating deployment webserver-deployment Dec 14 09:09:23.146: INFO: Waiting for observed generation 2 Dec 14 09:09:25.152: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Dec 14 09:09:25.156: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Dec 14 09:09:25.161: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Dec 14 09:09:25.170: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Dec 14 09:09:25.170: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Dec 14 09:09:25.173: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Dec 14 09:09:25.178: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Dec 14 09:09:25.178: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Dec 14 09:09:25.185: INFO: Updating deployment webserver-deployment Dec 14 09:09:25.185: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Dec 14 09:09:25.190: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Dec 14 09:09:25.192: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:09:25.198: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8567 5b7e6d38-b15d-4715-b06a-37a3310d944b 13956055 3 2021-12-14 09:09:15 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] 
[] [{e2e.test Update apps/v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004decf58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-12-14 09:09:23 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-12-14 09:09:25 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Dec 14 09:09:25.202: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8567 6d6c61fb-426b-4923-af7f-b1a96f1b6674 13956049 3 2021-12-14 09:09:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5b7e6d38-b15d-4715-b06a-37a3310d944b 0xc00419fc47 0xc00419fc48}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b7e6d38-b15d-4715-b06a-37a3310d944b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00419fd38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:09:25.203: INFO: All old ReplicaSets of Deployment "webserver-deployment": Dec 14 09:09:25.203: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-8567 41748267-a74f-4d46-90b4-630b6c086542 13956046 3 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5b7e6d38-b15d-4715-b06a-37a3310d944b 0xc00419ff57 0xc00419ff58}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b7e6d38-b15d-4715-b06a-37a3310d944b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 
0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00419ffe8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:09:25.227: INFO: Pod "webserver-deployment-795d758f88-7wmld" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7wmld webserver-deployment-795d758f88- deployment-8567 07d98d2d-d0bd-4481-ae8a-2f97f1d70d90 13956063 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004ded357 0xc004ded358}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dch2z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dch2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:Po
dStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.227: INFO: Pod "webserver-deployment-795d758f88-94m8x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-94m8x webserver-deployment-795d758f88- deployment-8567 cdb7a07a-b551-49ce-867b-da1842dd1cd4 13956082 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004ded4c7 0xc004ded4c8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d8nz9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8nz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,
SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.227: INFO: Pod "webserver-deployment-795d758f88-fz5xm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fz5xm webserver-deployment-795d758f88- deployment-8567 6c807093-13fb-4176-abca-186ec4702604 13956083 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd 
pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004ded620 0xc004ded621}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v2n5q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,
CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2n5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:
*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.228: INFO: Pod "webserver-deployment-795d758f88-g2q5v" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-g2q5v webserver-deployment-795d758f88- deployment-8567 c9eb6afc-4b4c-497a-b5a9-51323c12ea05 13956029 0 2021-12-14 09:09:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004ded770 0xc004ded771}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7rbxc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeM
ount{Name:kube-api-access-7rbxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:,StartTime:2021-12-14 09:09:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.228: INFO: Pod "webserver-deployment-795d758f88-g5d2b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-g5d2b webserver-deployment-795d758f88- deployment-8567 13e6a5cc-32e3-4bcb-ada6-46cf15fe9243 13955979 0 2021-12-14 09:09:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004ded947 0xc004ded948}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hk8gv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hk8gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 
09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:,StartTime:2021-12-14 09:09:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.229: INFO: Pod "webserver-deployment-795d758f88-g5fj2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-g5fj2 webserver-deployment-795d758f88- deployment-8567 44282993-a212-4c1e-afac-7be1796ffb60 13956081 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004dedb27 0xc004dedb28}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pfmwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfmwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.229: INFO: Pod "webserver-deployment-795d758f88-j2hkx" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-j2hkx webserver-deployment-795d758f88- deployment-8567 d0daa646-20f6-4778-aa73-303f2f0690c1 13956084 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004dedc97 0xc004dedc98}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gszb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFil
e{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gszb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.229: INFO: Pod "webserver-deployment-795d758f88-jkntr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jkntr webserver-deployment-795d758f88- deployment-8567 204e69d8-c454-4afb-ac16-7b31f7084023 13956086 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004deddf0 0xc004deddf1}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4d7v6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d7v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.230: INFO: Pod "webserver-deployment-795d758f88-kss7m" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-kss7m webserver-deployment-795d758f88- deployment-8567 3a8d8c39-0cca-4d98-9584-4dfbcddd66de 13955981 0 2021-12-14 09:09:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc004dedf57 0xc004dedf58}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fp7rp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFil
e{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Ex
ists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.230: INFO: Pod "webserver-deployment-795d758f88-lfc5j" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lfc5j webserver-deployment-795d758f88- deployment-8567 d8153b0a-1d5c-4518-abbc-b7ef1094d700 13956085 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc005a020c7 0xc005a020c8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vbt9r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbt9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.230: INFO: Pod "webserver-deployment-795d758f88-r6rjc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r6rjc webserver-deployment-795d758f88- deployment-8567 cb31255f-3f45-4324-9da5-29647b3ab34e 13956028 0 2021-12-14 09:09:23 +0000 UTC map[name:httpd 
pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc005a02220 0xc005a02221}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ddjfn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddjfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 
09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.96,StartTime:2021-12-14 09:09:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.230: INFO: Pod "webserver-deployment-795d758f88-w9f4f" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w9f4f webserver-deployment-795d758f88- deployment-8567 59a0abf7-0436-410d-ac97-573e30353a63 13955967 0 2021-12-14 09:09:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 6d6c61fb-426b-4923-af7f-b1a96f1b6674 0xc005a02427 0xc005a02428}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d6c61fb-426b-4923-af7f-b1a96f1b6674\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p9bgf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p9bgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 
09:09:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:,StartTime:2021-12-14 09:09:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.231: INFO: Pod "webserver-deployment-847dcfb7fb-2st4c" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2st4c webserver-deployment-847dcfb7fb- deployment-8567 dc6b73f9-c760-4d1a-82d7-6a0bd1be9f04 13956074 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02607 0xc005a02608}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mpdrk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mpdrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.231: INFO: Pod "webserver-deployment-847dcfb7fb-5tb2c" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5tb2c webserver-deployment-847dcfb7fb- deployment-8567 1dedd600-b74b-4c3c-b31d-70ba9b3aab3b 13956054 0 2021-12-14 09:09:25 
+0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02750 0xc005a02751}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mn7g9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},De
faultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn7g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.231: INFO: Pod "webserver-deployment-847dcfb7fb-7s264" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7s264 webserver-deployment-847dcfb7fb- deployment-8567 975ca390-d95e-4d6e-956c-fa4d378250b5 13955819 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a028a7 0xc005a028a8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6mvv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mvv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN
:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.91,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://79f47de56923fce38164149a678b4392ce24a02c138bc3c1bdb5b9af0bd82d1d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.232: INFO: Pod "webserver-deployment-847dcfb7fb-7vj8v" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7vj8v webserver-deployment-847dcfb7fb- deployment-8567 8bf7de31-8258-49b7-8ad4-f3956b988bde 13956075 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02a87 0xc005a02a88}] [] [{kube-controller-manager Update v1 2021-12-14 
09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s5xg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5xg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostna
meAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.232: INFO: Pod "webserver-deployment-847dcfb7fb-9kpjl" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9kpjl webserver-deployment-847dcfb7fb- deployment-8567 7336d88d-e63f-4a9a-be0d-48bd6db4f83e 13956068 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02bd0 0xc005a02bd1}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8gvdw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8gvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.232: INFO: Pod "webserver-deployment-847dcfb7fb-9rnbt" is 
available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9rnbt webserver-deployment-847dcfb7fb- deployment-8567 55219260-e809-409f-be36-77c480d206ba 13955825 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02d37 0xc005a02d38}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g9np8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g9np8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.15,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6398053ac6ea53699cd7062e72fbcd344cc6e2b4d3acc0c7d3d6dcb244eca9b7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.232: INFO: Pod "webserver-deployment-847dcfb7fb-gz8x7" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gz8x7 webserver-deployment-847dcfb7fb- deployment-8567 fa9ebd64-2da5-4466-b422-ac80f457640d 13955905 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a02f17 0xc005a02f18}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-29c7j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29c7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.92,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8a12bbaf629f97fd77134e8afe11ab2ba4c17c77f49a16ddc030976c083c0b8e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.233: INFO: Pod "webserver-deployment-847dcfb7fb-jl2kx" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jl2kx webserver-deployment-847dcfb7fb- deployment-8567 653492ed-4105-44b7-a939-2636bb06b17d 13955921 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a030f7 0xc005a030f8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.95\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jx6dh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jx6dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.95,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://af6d4b0b641757611f8bcedbd5489273279ceef02c2e24876c73df001d9ddac6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.233: INFO: Pod "webserver-deployment-847dcfb7fb-ktqf4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ktqf4 webserver-deployment-847dcfb7fb- deployment-8567 a2d01b5a-cf79-472a-acd3-32f1536aa4ce 13955842 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a032d7 0xc005a032d8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hz989,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hz989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.16,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://a9935fd98943cd259d2f7f0641ccd4ee11afe21bad3b6ed92c020dac38622f11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.233: INFO: Pod "webserver-deployment-847dcfb7fb-n47b9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-n47b9 webserver-deployment-847dcfb7fb- deployment-8567 07500791-a6ce-49e9-882b-a226020140fb 13955811 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a034b7 0xc005a034b8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rlxsq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlxsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.93,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4e148d7f36bd30ab59bc678eaeac00a0d77763f82dad8e2c06595ba90a753dba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.233: INFO: Pod "webserver-deployment-847dcfb7fb-nf9hf" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nf9hf webserver-deployment-847dcfb7fb- deployment-8567 06eb8427-3841-479d-868a-06498bf68fad 13956062 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03697 0xc005a03698}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gtxls,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gtxls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.234: INFO: Pod "webserver-deployment-847dcfb7fb-p97gz" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-p97gz webserver-deployment-847dcfb7fb- deployment-8567 e25f332f-4333-4965-bbfe-6b5e65fdced2 13956080 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a037f7 0xc005a037f8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xtjhm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtjhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.234: INFO: Pod "webserver-deployment-847dcfb7fb-slslr" is 
not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-slslr webserver-deployment-847dcfb7fb- deployment-8567 013e2967-416c-4b3f-a945-86ee4caf72f7 13956079 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03967 0xc005a03968}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g89xt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downw
ardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g89xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.234: INFO: Pod "webserver-deployment-847dcfb7fb-snn84" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-snn84 webserver-deployment-847dcfb7fb- deployment-8567 179d7847-38e2-457d-a8e0-d6510e5b2e6b 13956073 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03ab0 0xc005a03ab1}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6xddj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6xddj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.234: INFO: Pod "webserver-deployment-847dcfb7fb-sqrqx" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-sqrqx webserver-deployment-847dcfb7fb- deployment-8567 96f4c832-21f1-4a11-8894-a4c726ac0aee 13955916 0 2021-12-14 09:09:15 
+0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03bf0 0xc005a03bf1}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9n8f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9n8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.10,PodIP:192.168.1.94,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://1b57573a3b996fffdbacb5b54b6cb63a243cdca582b8fcadbbf140bd75cde728,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.234: INFO: Pod "webserver-deployment-847dcfb7fb-t8ldl" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-t8ldl webserver-deployment-847dcfb7fb- deployment-8567 bc24d498-feb8-4330-a9f9-caad47db2346 13956071 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03dc7 0xc005a03dc8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kkqzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kkqzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.235: INFO: Pod "webserver-deployment-847dcfb7fb-tl6m4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tl6m4 webserver-deployment-847dcfb7fb- deployment-8567 bb5248eb-7afd-4255-a718-e325916e8e4c 13955859 0 2021-12-14 09:09:15 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc005a03f57 0xc005a03f58}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lp7mm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lp7mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN
:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.19,StartTime:2021-12-14 09:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://7e6c5d5185032a077fe70311f8f33ed723c09464244e472bbbd585314044805d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.235: INFO: Pod "webserver-deployment-847dcfb7fb-wzdjm" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-wzdjm webserver-deployment-847dcfb7fb- deployment-8567 6545dde2-f9a3-4119-9f17-8175b9c63e13 13956076 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc00523c137 0xc00523c138}] [] [{kube-controller-manager Update v1 2021-12-14 
09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6p6k6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6p6k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostna
meAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.235: INFO: Pod "webserver-deployment-847dcfb7fb-xd2r9" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xd2r9 webserver-deployment-847dcfb7fb- deployment-8567 52473ae6-f44f-45e9-aef4-0e9c185416f3 13956060 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc00523c280 0xc00523c281}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vqd94,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqd94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 14 09:09:25.236: INFO: Pod "webserver-deployment-847dcfb7fb-zdszs" is 
not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zdszs webserver-deployment-847dcfb7fb- deployment-8567 c6179c53-e7dd-41ad-b22c-e65e51e8b4f7 13956070 0 2021-12-14 09:09:25 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 41748267-a74f-4d46-90b4-630b6c086542 0xc00523c3d7 0xc00523c3d8}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41748267-a74f-4d46-90b4-630b6c086542\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6b9w7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{Downw
ardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6b9w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-c846h,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key
:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:25.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8567" for this suite. 
• [SLOW TEST:10.173 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":25,"skipped":369,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:25.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting a starting resourceVersion STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:30.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-686" for this suite. 
• [SLOW TEST:5.456 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":26,"skipped":371,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:21.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Dec 14 09:09:21.600: INFO: Waiting up to 5m0s for pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3" in namespace "downward-api-1133" to be "Succeeded or Failed" Dec 14 09:09:21.603: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98623ms Dec 14 09:09:23.609: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008571153s Dec 14 09:09:25.613: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.013021268s Dec 14 09:09:27.618: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017909381s Dec 14 09:09:29.623: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022867447s Dec 14 09:09:31.627: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.027144459s Dec 14 09:09:33.632: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.031901277s STEP: Saw pod success Dec 14 09:09:33.632: INFO: Pod "downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3" satisfied condition "Succeeded or Failed" Dec 14 09:09:33.636: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3 container dapi-container: STEP: delete the pod Dec 14 09:09:33.653: INFO: Waiting for pod downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3 to disappear Dec 14 09:09:33.662: INFO: Pod downward-api-1dc9d0e2-d621-4042-b81e-ada07c377de3 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:33.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1133" for this suite. 
• [SLOW TEST:12.105 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":927,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:24.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:09:25.160: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Dec 14 09:09:27.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:29.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:31.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:33.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069765, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:09:36.189: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Dec 14 09:09:36.211: INFO: >>> 
kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:36.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2758" for this suite. STEP: Destroying namespace "webhook-2758-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.859 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":37,"skipped":857,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:30.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the 
pod Dec 14 09:09:30.767: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:32.772: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:34.771: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:36.771: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:38.771: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:40.771: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:42.772: INFO: The status of Pod annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681 is Running (Ready = true) Dec 14 09:09:43.296: INFO: Successfully updated pod "annotationupdateff46e6be-b9a0-44bd-9053-a0201d7b7681" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:45.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8367" for this suite. 
• [SLOW TEST:14.596 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:45.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Dec 14 09:09:45.458: INFO: starting watch STEP: patching STEP: updating Dec 14 09:09:45.468: INFO: waiting for watch events with expected annotations Dec 14 09:09:45.468: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:45.493: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "ingressclass-1412" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":28,"skipped":402,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:14.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:09:14.720: INFO: Pod name rollover-pod: Found 0 pods out of 1 Dec 14 09:09:19.724: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 14 09:09:21.731: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Dec 14 09:09:23.735: INFO: Creating deployment "test-rollover-deployment" Dec 14 09:09:23.743: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Dec 14 09:09:25.751: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Dec 14 09:09:25.757: INFO: Ensure that both replica sets have 1 created replica Dec 14 09:09:25.763: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Dec 14 09:09:25.772: INFO: Updating deployment test-rollover-deployment Dec 14 09:09:25.772: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Dec 14 09:09:27.780: INFO: Wait for 
revision update of deployment "test-rollover-deployment" to 2 Dec 14 09:09:27.788: INFO: Make sure deployment "test-rollover-deployment" is complete Dec 14 09:09:27.796: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:27.796: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069766, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:29.805: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:29.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069766, 
loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:31.805: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:31.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069766, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:33.805: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:33.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069766, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:35.804: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:35.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:37.804: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:37.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, 
loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:39.806: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:39.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:41.805: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:41.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, 
loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:43.804: INFO: all replica sets need to contain the pod-template-hash label Dec 14 09:09:43.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069774, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069763, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:09:45.806: INFO: Dec 14 09:09:45.806: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Dec 14 09:09:45.814: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:{test-rollover-deployment deployment-9705 9dfb675b-0b54-470c-b448-9ee10ce827d9 13956691 2 2021-12-14 09:09:23 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b8f5f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-12-14 09:09:23 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2021-12-14 09:09:44 +0000 UTC,LastTransitionTime:2021-12-14 09:09:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Dec 14 09:09:45.819: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9705 bf7dfb70-0672-44b6-b02d-4fe2c08143fe 13956681 2 2021-12-14 09:09:25 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9dfb675b-0b54-470c-b448-9ee10ce827d9 0xc006b8fbb0 0xc006b8fbb1}] [] 
[{kube-controller-manager Update apps/v1 2021-12-14 09:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9dfb675b-0b54-470c-b448-9ee10ce827d9\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:44 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b8fc48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:09:45.819: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Dec 14 09:09:45.819: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9705 b9ab64a0-ff3b-4793-96ff-3deaf8b4b91a 13956690 2 2021-12-14 09:09:14 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9dfb675b-0b54-470c-b448-9ee10ce827d9 0xc006b8f95f 0xc006b8f970}] [] [{e2e.test Update apps/v1 2021-12-14 09:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9dfb675b-0b54-470c-b448-9ee10ce827d9\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:44 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006b8fa28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:09:45.819: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9705 2c1d2429-f024-451e-8e7c-ec632b446619 13956202 2 2021-12-14 09:09:23 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9dfb675b-0b54-470c-b448-9ee10ce827d9 0xc006b8fa97 0xc006b8fa98}] [] [{kube-controller-manager Update apps/v1 2021-12-14 09:09:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9dfb675b-0b54-470c-b448-9ee10ce827d9\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2021-12-14 09:09:26 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006b8fb48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 14 09:09:45.823: INFO: Pod "test-rollover-deployment-98c5f4599-jg457" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-jg457 test-rollover-deployment-98c5f4599- deployment-9705 55b953fc-8126-4e1a-b68c-998911348d36 13956517 0 2021-12-14 09:09:26 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 bf7dfb70-0672-44b6-b02d-4fe2c08143fe 0xc004c5e2f0 0xc004c5e2f1}] [] [{kube-controller-manager Update v1 2021-12-14 09:09:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf7dfb70-0672-44b6-b02d-4fe2c08143fe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-12-14 09:09:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-74n78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74n78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capi-v1.22-md-0-698f477975-vkd62,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:29 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-12-14 09:09:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.25.0.9,PodIP:192.168.2.27,StartTime:2021-12-14 09:09:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-12-14 09:09:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://259762707da9ccd48fc0ef192bd61429ece52999c56e1968e2d1216dce068157,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:45.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9705" for this suite. 
• [SLOW TEST:31.151 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":37,"skipped":709,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":39,"skipped":812,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:14.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:09:14.289: INFO: created pod Dec 14 09:09:14.289: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4168" to be "Succeeded or Failed" Dec 14 09:09:14.292: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.011675ms Dec 14 09:09:16.298: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008237205s Dec 14 09:09:18.303: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013250587s STEP: Saw pod success Dec 14 09:09:18.303: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Dec 14 09:09:48.304: INFO: polling logs Dec 14 09:09:48.311: INFO: Pod logs: 2021/12/14 09:09:15 OK: Got token 2021/12/14 09:09:15 validating with in-cluster discovery 2021/12/14 09:09:15 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/12/14 09:09:15 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4168:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1639473554, NotBefore:1639472954, IssuedAt:1639472954, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4168", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9abb05d6-82c0-4ee6-b351-d33a6c3e8cfe"}}} 2021/12/14 09:09:15 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/12/14 09:09:15 OK: Validated signature on JWT 2021/12/14 09:09:15 OK: Got valid claims from token! 2021/12/14 09:09:15 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4168:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1639473554, NotBefore:1639472954, IssuedAt:1639472954, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4168", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9abb05d6-82c0-4ee6-b351-d33a6c3e8cfe"}}} Dec 14 09:09:48.311: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:48.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4168" for this suite. 
• [SLOW TEST:34.083 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":40,"skipped":812,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:45.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:09:46.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:09:49.450: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the 
validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6548" for this suite. STEP: Destroying namespace "webhook-6548-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":38,"skipped":717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:10.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-8834 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Dec 14 09:08:10.864: INFO: Found 0 stateful pods, waiting for 3 Dec 14 09:08:20.870: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:08:20.870: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:08:20.870: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 14 09:08:30.869: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:08:30.869: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:08:30.869: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Dec 14 09:08:30.905: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Dec 14 09:08:40.946: INFO: Updating stateful set ss2 Dec 14 09:08:40.956: INFO: Waiting for Pod statefulset-8834/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted Dec 14 09:08:51.020: INFO: Found 1 stateful pods, waiting for 3 Dec 14 09:09:01.026: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:09:01.026: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 14 09:09:01.026: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Dec 14 09:09:01.051: INFO: Updating stateful set ss2 Dec 14 09:09:01.057: INFO: Waiting 
for Pod statefulset-8834/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Dec 14 09:09:11.085: INFO: Updating stateful set ss2 Dec 14 09:09:11.094: INFO: Waiting for StatefulSet statefulset-8834/ss2 to complete update Dec 14 09:09:11.095: INFO: Waiting for Pod statefulset-8834/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Dec 14 09:09:21.105: INFO: Waiting for StatefulSet statefulset-8834/ss2 to complete update [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Dec 14 09:09:31.104: INFO: Deleting all statefulset in ns statefulset-8834 Dec 14 09:09:31.108: INFO: Scaling statefulset ss2 to 0 Dec 14 09:09:51.124: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:09:51.128: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:51.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8834" for this suite. 
• [SLOW TEST:100.330 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":22,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:48.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:09:48.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4" in namespace "projected-163" to be "Succeeded or Failed" Dec 14 09:09:48.389: INFO: Pod "downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.347807ms Dec 14 09:09:50.393: INFO: Pod "downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4": Phase="Running", Reason="", readiness=true. Elapsed: 2.007319269s Dec 14 09:09:52.397: INFO: Pod "downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011300473s STEP: Saw pod success Dec 14 09:09:52.397: INFO: Pod "downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4" satisfied condition "Succeeded or Failed" Dec 14 09:09:52.401: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4 container client-container: STEP: delete the pod Dec 14 09:09:52.417: INFO: Waiting for pod downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4 to disappear Dec 14 09:09:52.420: INFO: Pod downwardapi-volume-50e99c93-4845-4694-8b7c-dedb6aca7be4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:52.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-163" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:52.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Dec 14 09:09:52.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2677 create -f -' Dec 14 09:09:52.804: INFO: stderr: "" Dec 14 09:09:52.804: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Dec 14 09:09:52.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2677 diff -f -' Dec 14 09:09:53.029: INFO: rc: 1 Dec 14 09:09:53.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2677 delete -f -' Dec 14 09:09:53.131: INFO: stderr: "" Dec 14 09:09:53.131: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:53.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-2677" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":42,"skipped":839,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:49.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-e53788b5-0681-4ff2-82a5-308c6ef5cd5f STEP: Creating a pod to test consume secrets Dec 14 09:09:49.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465" in namespace "projected-8801" to be "Succeeded or Failed" Dec 14 09:09:49.785: INFO: Pod "pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465": Phase="Pending", Reason="", readiness=false. Elapsed: 3.162102ms Dec 14 09:09:51.789: INFO: Pod "pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007084779s Dec 14 09:09:53.805: INFO: Pod "pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022741994s STEP: Saw pod success Dec 14 09:09:53.805: INFO: Pod "pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465" satisfied condition "Succeeded or Failed" Dec 14 09:09:53.808: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465 container projected-secret-volume-test: STEP: delete the pod Dec 14 09:09:53.822: INFO: Waiting for pod pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465 to disappear Dec 14 09:09:53.824: INFO: Pod pod-projected-secrets-5e1b2561-a504-456d-afaa-956691f2f465 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:53.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8801" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:53.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:09:53.212: INFO: The status of Pod 
busybox-host-aliasesa4abfc87-df88-4b89-a86c-d9ca3ef6833a is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:09:55.218: INFO: The status of Pod busybox-host-aliasesa4abfc87-df88-4b89-a86c-d9ca3ef6833a is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:55.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-186" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:55.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Dec 14 09:09:55.357: INFO: Waiting up to 5m0s for pod "downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0" in namespace "downward-api-6639" to be "Succeeded or Failed" Dec 14 09:09:55.361: INFO: Pod "downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517074ms Dec 14 09:09:57.366: INFO: Pod "downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008716785s STEP: Saw pod success Dec 14 09:09:57.366: INFO: Pod "downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0" satisfied condition "Succeeded or Failed" Dec 14 09:09:57.370: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0 container dapi-container: STEP: delete the pod Dec 14 09:09:57.386: INFO: Waiting for pod downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0 to disappear Dec 14 09:09:57.389: INFO: Pod downward-api-68c9bd03-c0db-4aa9-822e-4e0dc0b1eee0 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:57.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6639" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":877,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:57.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Dec 14 09:09:57.467: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-741 2ce7e2e8-b69c-4e9a-a735-4e2532354180 
13957113 0 2021-12-14 09:09:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:09:57.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-741 2ce7e2e8-b69c-4e9a-a735-4e2532354180 13957114 0 2021-12-14 09:09:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Dec 14 09:09:57.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-741 2ce7e2e8-b69c-4e9a-a735-4e2532354180 13957115 0 2021-12-14 09:09:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:09:57.479: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-741 2ce7e2e8-b69c-4e9a-a735-4e2532354180 13957117 0 2021-12-14 09:09:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:57.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-741" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":45,"skipped":883,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:53.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:09:54.398: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 14 09:09:56.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069794, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63775069794, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069794, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069794, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:09:59.423: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:09:59.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6966" for this suite. STEP: Destroying namespace "webhook-6966-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":40,"skipped":784,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:57.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-f1d10208-ac38-435b-8d9f-d6c840c9e1ab STEP: Creating a pod to test consume configMaps Dec 14 09:09:57.570: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b" in namespace "projected-5273" to be "Succeeded or Failed" Dec 14 09:09:57.572: INFO: Pod "pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.461062ms Dec 14 09:09:59.577: INFO: Pod "pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b": Phase="Running", Reason="", readiness=true. Elapsed: 2.006612118s Dec 14 09:10:01.582: INFO: Pod "pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011707863s STEP: Saw pod success Dec 14 09:10:01.582: INFO: Pod "pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b" satisfied condition "Succeeded or Failed" Dec 14 09:10:01.585: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b container agnhost-container: STEP: delete the pod Dec 14 09:10:01.604: INFO: Waiting for pod pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b to disappear Dec 14 09:10:01.606: INFO: Pod pod-projected-configmaps-eb49a8a9-5718-4436-8105-5c305bda935b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5273" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:59.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:03.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9605" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":41,"skipped":787,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:45.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Dec 14 09:09:45.578: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:09:49.353: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:04.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9900" for this suite. 
• [SLOW TEST:18.834 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":29,"skipped":414,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:01.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Dec 14 09:10:01.732: INFO: Waiting up to 5m0s for pod "downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28" in namespace "downward-api-9781" to be "Succeeded or Failed" Dec 14 09:10:01.735: INFO: Pod "downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700437ms Dec 14 09:10:03.740: INFO: Pod "downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28": Phase="Running", Reason="", readiness=true. Elapsed: 2.007540771s Dec 14 09:10:05.748: INFO: Pod "downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015917707s STEP: Saw pod success Dec 14 09:10:05.748: INFO: Pod "downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28" satisfied condition "Succeeded or Failed" Dec 14 09:10:05.755: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28 container dapi-container: STEP: delete the pod Dec 14 09:10:05.772: INFO: Waiting for pod downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28 to disappear Dec 14 09:10:05.776: INFO: Pod downward-api-4078f7fe-c25a-4b00-9c36-894aea388c28 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:05.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9781" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":922,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:03.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Dec 14 09:10:05.241: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-1250" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":42,"skipped":794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:07.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 14 09:10:09.397: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:09.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8647" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:05.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Dec 14 09:10:05.865: INFO: The status of Pod labelsupdateb5dd4df1-09e3-4e05-a24c-b6b82a38ba78 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:07.870: INFO: The status of Pod labelsupdateb5dd4df1-09e3-4e05-a24c-b6b82a38ba78 is Running (Ready = true) Dec 14 09:10:08.394: INFO: Successfully updated pod "labelsupdateb5dd4df1-09e3-4e05-a24c-b6b82a38ba78" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:12.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2183" for this suite. 
• [SLOW TEST:6.603 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:09.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Dec 14 09:10:09.586: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:11.592: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:12.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replication-controller-5116" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":44,"skipped":880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:12.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-d0e4e32f-c79d-4952-b28e-046cdaacdc6d STEP: Creating a pod to test consume configMaps Dec 14 09:10:12.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2" in namespace "configmap-3078" to be "Succeeded or Failed" Dec 14 09:10:12.568: INFO: Pod "pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.24003ms Dec 14 09:10:14.572: INFO: Pod "pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007130175s STEP: Saw pod success Dec 14 09:10:14.572: INFO: Pod "pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2" satisfied condition "Succeeded or Failed" Dec 14 09:10:14.575: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2 container agnhost-container: STEP: delete the pod Dec 14 09:10:14.588: INFO: Waiting for pod pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2 to disappear Dec 14 09:10:14.590: INFO: Pod pod-configmaps-348f1084-b43f-43a7-89ad-1d155514ffa2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:14.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3078" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":975,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:14.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:10:14.672: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb" in namespace "projected-3298" to be "Succeeded or Failed" Dec 14 09:10:14.675: INFO: Pod "downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.851967ms Dec 14 09:10:16.680: INFO: Pod "downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0076311s STEP: Saw pod success Dec 14 09:10:16.680: INFO: Pod "downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb" satisfied condition "Succeeded or Failed" Dec 14 09:10:16.684: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb container client-container: STEP: delete the pod Dec 14 09:10:16.700: INFO: Waiting for pod downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb to disappear Dec 14 09:10:16.702: INFO: Pod downwardapi-volume-8ded757b-95e5-4683-8ccb-864b44e9a9eb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:16.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3298" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":988,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:04.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:21.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-932" for this suite. • [SLOW TEST:17.092 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":30,"skipped":423,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:21.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:10:22.443: INFO: Checking APIGroup: apiregistration.k8s.io Dec 14 09:10:22.445: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Dec 14 09:10:22.445: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] Dec 14 09:10:22.445: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Dec 14 09:10:22.445: INFO: Checking APIGroup: apps Dec 14 09:10:22.447: INFO: PreferredVersion.GroupVersion: apps/v1 Dec 14 09:10:22.447: INFO: Versions found [{apps/v1 v1}] Dec 14 09:10:22.447: INFO: apps/v1 matches apps/v1 Dec 14 09:10:22.447: INFO: Checking APIGroup: events.k8s.io Dec 14 09:10:22.449: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Dec 14 09:10:22.449: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Dec 14 09:10:22.449: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Dec 14 09:10:22.449: INFO: 
Checking APIGroup: authentication.k8s.io Dec 14 09:10:22.451: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Dec 14 09:10:22.451: INFO: Versions found [{authentication.k8s.io/v1 v1}] Dec 14 09:10:22.451: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Dec 14 09:10:22.451: INFO: Checking APIGroup: authorization.k8s.io Dec 14 09:10:22.453: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Dec 14 09:10:22.453: INFO: Versions found [{authorization.k8s.io/v1 v1}] Dec 14 09:10:22.453: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Dec 14 09:10:22.453: INFO: Checking APIGroup: autoscaling Dec 14 09:10:22.455: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Dec 14 09:10:22.455: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Dec 14 09:10:22.455: INFO: autoscaling/v1 matches autoscaling/v1 Dec 14 09:10:22.455: INFO: Checking APIGroup: batch Dec 14 09:10:22.456: INFO: PreferredVersion.GroupVersion: batch/v1 Dec 14 09:10:22.456: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Dec 14 09:10:22.456: INFO: batch/v1 matches batch/v1 Dec 14 09:10:22.456: INFO: Checking APIGroup: certificates.k8s.io Dec 14 09:10:22.458: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Dec 14 09:10:22.458: INFO: Versions found [{certificates.k8s.io/v1 v1}] Dec 14 09:10:22.458: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Dec 14 09:10:22.458: INFO: Checking APIGroup: networking.k8s.io Dec 14 09:10:22.460: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Dec 14 09:10:22.460: INFO: Versions found [{networking.k8s.io/v1 v1}] Dec 14 09:10:22.460: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Dec 14 09:10:22.460: INFO: Checking APIGroup: policy Dec 14 09:10:22.461: INFO: PreferredVersion.GroupVersion: policy/v1 Dec 14 09:10:22.461: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Dec 14 09:10:22.462: INFO: policy/v1 
matches policy/v1 Dec 14 09:10:22.462: INFO: Checking APIGroup: rbac.authorization.k8s.io Dec 14 09:10:22.463: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Dec 14 09:10:22.463: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] Dec 14 09:10:22.463: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Dec 14 09:10:22.463: INFO: Checking APIGroup: storage.k8s.io Dec 14 09:10:22.465: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Dec 14 09:10:22.465: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Dec 14 09:10:22.465: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Dec 14 09:10:22.465: INFO: Checking APIGroup: admissionregistration.k8s.io Dec 14 09:10:22.467: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Dec 14 09:10:22.467: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] Dec 14 09:10:22.467: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Dec 14 09:10:22.467: INFO: Checking APIGroup: apiextensions.k8s.io Dec 14 09:10:22.469: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Dec 14 09:10:22.469: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] Dec 14 09:10:22.469: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Dec 14 09:10:22.469: INFO: Checking APIGroup: scheduling.k8s.io Dec 14 09:10:22.470: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Dec 14 09:10:22.470: INFO: Versions found [{scheduling.k8s.io/v1 v1}] Dec 14 09:10:22.470: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Dec 14 09:10:22.470: INFO: Checking APIGroup: coordination.k8s.io Dec 14 09:10:22.472: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Dec 14 09:10:22.472: INFO: Versions found [{coordination.k8s.io/v1 v1}] Dec 14 09:10:22.472: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Dec 14 09:10:22.472: INFO: Checking APIGroup: node.k8s.io Dec 14 09:10:22.474: INFO: 
PreferredVersion.GroupVersion: node.k8s.io/v1 Dec 14 09:10:22.474: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Dec 14 09:10:22.474: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Dec 14 09:10:22.474: INFO: Checking APIGroup: discovery.k8s.io Dec 14 09:10:22.476: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Dec 14 09:10:22.476: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Dec 14 09:10:22.476: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Dec 14 09:10:22.476: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Dec 14 09:10:22.478: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Dec 14 09:10:22.478: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Dec 14 09:10:22.478: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Dec 14 09:10:22.478: INFO: Checking APIGroup: litmuschaos.io Dec 14 09:10:22.480: INFO: PreferredVersion.GroupVersion: litmuschaos.io/v1alpha1 Dec 14 09:10:22.480: INFO: Versions found [{litmuschaos.io/v1alpha1 v1alpha1}] Dec 14 09:10:22.480: INFO: litmuschaos.io/v1alpha1 matches litmuschaos.io/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:22.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7715" for this suite. 
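The Discovery test above walks every APIGroup returned by the apiserver and checks that the group's `preferredVersion.groupVersion` is among the versions the group actually serves (e.g. `batch/v1` among `{batch/v1, batch/v1beta1}`). A minimal sketch of that check over a parsed discovery document (the `batch` example below is shaped like the log output, not fetched from a cluster):

```python
def preferred_version_ok(group):
    """Mirror the e2e check: an APIGroup's preferredVersion must be one
    of the groupVersions the group serves in its discovery document."""
    preferred = group["preferredVersion"]["groupVersion"]
    served = [v["groupVersion"] for v in group["versions"]]
    return preferred in served

# Shaped like the log above: batch serves v1 and v1beta1, preferring v1.
batch = {
    "name": "batch",
    "preferredVersion": {"groupVersion": "batch/v1", "version": "v1"},
    "versions": [
        {"groupVersion": "batch/v1", "version": "v1"},
        {"groupVersion": "batch/v1beta1", "version": "v1beta1"},
    ],
}
```

Against a live cluster the same structure comes back from `GET /apis` (an `APIGroupList`), which is what produces the "X matches X" lines in the log.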
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":31,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:33.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-ef574ebc-534c-488c-8804-1e85b9722c0e in namespace container-probe-2831 Dec 14 09:09:39.762: INFO: Started pod busybox-ef574ebc-534c-488c-8804-1e85b9722c0e in namespace container-probe-2831 STEP: checking the pod's current state and verifying that restartCount is present Dec 14 09:09:39.766: INFO: Initial restart count of pod busybox-ef574ebc-534c-488c-8804-1e85b9722c0e is 0 Dec 14 09:10:25.959: INFO: Restart count of pod container-probe-2831/busybox-ef574ebc-534c-488c-8804-1e85b9722c0e is now 1 (46.192963488s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:25.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2831" for this suite. 
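The container-probe test above creates a busybox pod whose exec liveness probe (`cat /tmp/health`) starts failing partway through the container's life, then waits for `restartCount` to go from 0 to 1 (here after ~46s). A sketch of such a pod manifest (timings and names are illustrative, not necessarily the exact e2e fixture):

```python
def failing_liveness_pod(name="busybox-liveness-example"):
    """Pod whose exec liveness probe begins failing after ~10s, so the
    kubelet kills and restarts the container -- the restartCount bump
    the probe test waits for."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "busybox",
                "image": "busybox:1.29",
                # The health file exists for 10s, then is removed; every
                # probe after that fails, and with failureThreshold=1 a
                # single failure triggers the restart.
                "command": ["/bin/sh", "-c",
                            "echo ok >/tmp/health; sleep 10; "
                            "rm -f /tmp/health; sleep 600"],
                "livenessProbe": {
                    "exec": {"command": ["cat", "/tmp/health"]},
                    "initialDelaySeconds": 15,
                    "failureThreshold": 1,
                },
            }],
        },
    }
```

The companion test later in this log ("should *not* be restarted with a /healthz http liveness probe") is the inverse: a webserver pod whose HTTP probe keeps succeeding, so the test asserts `restartCount` stays 0 over the observation window (~4 minutes here).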
• [SLOW TEST:52.259 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":945,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:25.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added Dec 14 09:10:26.026: INFO: Found Service test-service-2l65n in namespace services-4781 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Dec 14 09:10:26.026: INFO: Service test-service-2l65n created STEP: Getting /status Dec 14 09:10:26.030: INFO: Service test-service-2l65n has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Dec 14 09:10:26.038: INFO: observed Service test-service-2l65n in namespace services-4781 with annotations: map[] & LoadBalancer: {[]} Dec 14 09:10:26.038: INFO: Found Service 
test-service-2l65n in namespace services-4781 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Dec 14 09:10:26.038: INFO: Service test-service-2l65n has service status patched STEP: updating the ServiceStatus Dec 14 09:10:26.046: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Dec 14 09:10:26.048: INFO: Observed Service test-service-2l65n in namespace services-4781 with annotations: map[] & Conditions: {[]} Dec 14 09:10:26.048: INFO: Observed event: &Service{ObjectMeta:{test-service-2l65n services-4781 cb68616b-c874-4bfb-8811-8b753051abbd 13957747 0 2021-12-14 09:10:26 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-12-14 09:10:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2021-12-14 09:10:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.136.229.41,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.136.229.41],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Dec 14 09:10:26.049: INFO: Found Service test-service-2l65n in namespace services-4781 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Dec 14 09:10:26.049: INFO: Service test-service-2l65n has service status updated STEP: patching the service STEP: watching for the Service to be patched Dec 14 09:10:26.057: INFO: observed Service test-service-2l65n in namespace services-4781 with labels: map[test-service-static:true] Dec 14 09:10:26.057: INFO: observed Service test-service-2l65n in namespace services-4781 with labels: map[test-service-static:true] Dec 14 09:10:26.057: INFO: observed Service test-service-2l65n in namespace services-4781 with labels: map[test-service-static:true] Dec 14 09:10:26.057: INFO: Found Service test-service-2l65n in namespace services-4781 with labels: map[test-service:patched test-service-static:true] Dec 14 09:10:26.057: INFO: Service test-service-2l65n patched STEP: deleting the service STEP: watching for the Service to be deleted Dec 14 09:10:26.069: INFO: Observed event: ADDED Dec 14 09:10:26.069: INFO: Observed event: MODIFIED Dec 14 09:10:26.069: INFO: Observed event: MODIFIED Dec 14 09:10:26.069: INFO: Observed event: MODIFIED Dec 14 09:10:26.069: INFO: Found Service test-service-2l65n in namespace services-4781 with 
labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Dec 14 09:10:26.070: INFO: Service test-service-2l65n deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:26.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4781" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:06:22.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-0ed5742d-b6a4-44d5-b55a-d05e2f68a058 in namespace container-probe-4439 Dec 14 09:06:28.452: INFO: Started pod test-webserver-0ed5742d-b6a4-44d5-b55a-d05e2f68a058 in namespace container-probe-4439 STEP: checking the pod's current state and verifying that restartCount is present Dec 14 09:06:28.456: INFO: Initial restart count of pod test-webserver-0ed5742d-b6a4-44d5-b55a-d05e2f68a058 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 
09:10:29.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4439" for this suite. • [SLOW TEST:246.699 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":49,"skipped":946,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:26.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Dec 14 09:10:26.533: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:10:29.557: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:10:29.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:32.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7126" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.760 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":50,"skipped":946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:32.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Dec 14 09:10:32.965: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4181 proxy --unix-socket=/tmp/kubectl-proxy-unix920634385/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:33.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4181" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":51,"skipped":971,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":310,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:29.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: 
Wait for the deployment to be ready Dec 14 09:10:29.948: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:10:32.972: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:33.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7613" for this suite. STEP: Destroying namespace "webhook-7613-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •S ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":18,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:09:36.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Dec 14 09:09:36.354: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13956577 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:09:36.355: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13956577 0 2021-12-14 09:09:36 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Dec 14 09:09:46.364: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13956731 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:09:46.364: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13956731 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Dec 14 09:09:56.372: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13957083 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:09:56.372: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 
d9bc3306-725a-4e77-93e3-41af3737d8bc 13957083 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Dec 14 09:10:06.378: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13957362 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:10:06.378: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2753 d9bc3306-725a-4e77-93e3-41af3737d8bc 13957362 0 2021-12-14 09:09:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-12-14 09:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Dec 14 09:10:16.389: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2753 99bf3a4f-1f1a-4e0b-81ba-348ff81ed53f 13957579 0 2021-12-14 09:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-12-14 09:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:10:16.390: 
INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2753 99bf3a4f-1f1a-4e0b-81ba-348ff81ed53f 13957579 0 2021-12-14 09:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-12-14 09:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Dec 14 09:10:26.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2753 99bf3a4f-1f1a-4e0b-81ba-348ff81ed53f 13957761 0 2021-12-14 09:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-12-14 09:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Dec 14 09:10:26.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2753 99bf3a4f-1f1a-4e0b-81ba-348ff81ed53f 13957761 0 2021-12-14 09:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-12-14 09:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:36.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2753" for this suite. 
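The Watchers test above opens three watches with different label selectors (label A, label B, and A-or-B) and asserts each watcher observes exactly the ADDED/MODIFIED/DELETED notifications for configmaps matching its selector. A toy model of that dispatch logic, assuming a single key=value selector per watcher (the real API supports richer set-based selectors):

```python
def dispatch(events, selector):
    """Deliver only the events whose object labels match the watcher's
    selector -- a simplified stand-in for a label-selected watch."""
    key, value = selector
    return [(etype, obj["name"]) for etype, obj in events
            if obj["labels"].get(key) == value]

# Event stream mirroring the log: configmap A is added, modified twice,
# and deleted; configmap B is added and deleted.
LABEL = "watch-this-configmap"
events = [
    ("ADDED",    {"name": "cm-a", "labels": {LABEL: "multiple-watchers-A"}}),
    ("MODIFIED", {"name": "cm-a", "labels": {LABEL: "multiple-watchers-A"}}),
    ("MODIFIED", {"name": "cm-a", "labels": {LABEL: "multiple-watchers-A"}}),
    ("DELETED",  {"name": "cm-a", "labels": {LABEL: "multiple-watchers-A"}}),
    ("ADDED",    {"name": "cm-b", "labels": {LABEL: "multiple-watchers-B"}}),
    ("DELETED",  {"name": "cm-b", "labels": {LABEL: "multiple-watchers-B"}}),
]
```

Each duplicated "Got : ..." line in the log is the same event arriving at two watchers (the single-label watch and the A-or-B watch), which is why every notification appears twice.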
• [SLOW TEST:60.092 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":38,"skipped":871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:12.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-3374 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 14 09:10:12.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Dec 14 09:10:12.773: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:14.778: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:16.777: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:18.780: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:20.780: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:22.777: INFO: The status of Pod netserver-0 
is Running (Ready = false) Dec 14 09:10:24.779: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:26.780: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:28.779: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:30.782: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:32.778: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:34.779: INFO: The status of Pod netserver-0 is Running (Ready = true) Dec 14 09:10:34.786: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Dec 14 09:10:40.810: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Dec 14 09:10:40.810: INFO: Breadth first check of 192.168.1.111 on host 172.25.0.10... Dec 14 09:10:40.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.41:9080/dial?request=hostname&protocol=http&host=192.168.1.111&port=8083&tries=1'] Namespace:pod-network-test-3374 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:10:40.814: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:10:41.262: INFO: Waiting for responses: map[] Dec 14 09:10:41.262: INFO: reached 192.168.1.111 after 0/1 tries Dec 14 09:10:41.262: INFO: Breadth first check of 192.168.2.35 on host 172.25.0.9... 
Dec 14 09:10:41.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.41:9080/dial?request=hostname&protocol=http&host=192.168.2.35&port=8083&tries=1'] Namespace:pod-network-test-3374 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:10:41.267: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:10:41.388: INFO: Waiting for responses: map[] Dec 14 09:10:41.388: INFO: reached 192.168.2.35 after 0/1 tries Dec 14 09:10:41.388: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:41.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3374" for this suite. • [SLOW TEST:28.683 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:08:05.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion 
STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Dec 14 09:10:05.895: INFO: Successfully updated pod "var-expansion-590cb6ea-cd00-4297-9a3d-0f7bd7f83080" STEP: waiting for pod running STEP: deleting the pod gracefully Dec 14 09:10:07.903: INFO: Deleting pod "var-expansion-590cb6ea-cd00-4297-9a3d-0f7bd7f83080" in namespace "var-expansion-3447" Dec 14 09:10:07.908: INFO: Wait up to 5m0s for pod "var-expansion-590cb6ea-cd00-4297-9a3d-0f7bd7f83080" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:41.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3447" for this suite. 
• [SLOW TEST:156.591 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":5,"skipped":181,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:41.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:10:41.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07" in namespace "projected-1288" to be "Succeeded or Failed" Dec 14 09:10:41.531: INFO: Pod "downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.489344ms Dec 14 09:10:43.536: INFO: Pod "downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007980447s STEP: Saw pod success Dec 14 09:10:43.537: INFO: Pod "downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07" satisfied condition "Succeeded or Failed" Dec 14 09:10:43.541: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07 container client-container: STEP: delete the pod Dec 14 09:10:43.557: INFO: Waiting for pod downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07 to disappear Dec 14 09:10:43.562: INFO: Pod downwardapi-volume-5c62ed8d-0cdc-4864-89de-890d8455be07 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:43.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1288" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":951,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:16.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-2779 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 14 09:10:16.791: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Dec 14 09:10:16.817: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:18.822: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:20.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:22.821: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:24.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:26.835: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:28.822: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:30.822: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:32.821: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:34.823: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Dec 14 09:10:36.821: INFO: The status of Pod netserver-0 is Running (Ready = false) Dec 14 09:10:38.823: INFO: The status of Pod netserver-0 is Running (Ready = true) Dec 14 09:10:38.831: INFO: The status of Pod netserver-1 is Running (Ready = false) Dec 14 09:10:40.835: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Dec 14 09:10:44.858: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Dec 14 09:10:44.859: INFO: Breadth first check of 192.168.1.113 on host 172.25.0.10... Dec 14 09:10:44.862: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.44:9080/dial?request=hostname&protocol=udp&host=192.168.1.113&port=8081&tries=1'] Namespace:pod-network-test-2779 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:10:44.862: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:10:45.050: INFO: Waiting for responses: map[] Dec 14 09:10:45.050: INFO: reached 192.168.1.113 after 0/1 tries Dec 14 09:10:45.050: INFO: Breadth first check of 192.168.2.36 on host 172.25.0.9... Dec 14 09:10:45.053: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.44:9080/dial?request=hostname&protocol=udp&host=192.168.2.36&port=8081&tries=1'] Namespace:pod-network-test-2779 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Dec 14 09:10:45.053: INFO: >>> kubeConfig: /root/.kube/config Dec 14 09:10:45.214: INFO: Waiting for responses: map[] Dec 14 09:10:45.214: INFO: reached 192.168.2.36 after 0/1 tries Dec 14 09:10:45.214: INFO: Going to retry 0 out of 2 pods.... 
[AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:45.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2779" for this suite. • [SLOW TEST:28.480 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":1000,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:45.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: 
deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:45.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1587" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":52,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:43.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Dec 14 09:10:43.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18" in namespace "projected-2014" to be "Succeeded or Failed" Dec 14 09:10:43.634: INFO: Pod "downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.005049ms Dec 14 09:10:45.638: INFO: Pod "downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007585971s STEP: Saw pod success Dec 14 09:10:45.638: INFO: Pod "downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18" satisfied condition "Succeeded or Failed" Dec 14 09:10:45.641: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18 container client-container: STEP: delete the pod Dec 14 09:10:45.658: INFO: Waiting for pod downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18 to disappear Dec 14 09:10:45.661: INFO: Pod downwardapi-volume-31a6b5e8-924b-494f-aa86-f65755ed9d18 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:45.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2014" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:45.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Dec 14 09:10:45.598: INFO: Waiting up to 5m0s for pod "client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559" in namespace "containers-1807" to be "Succeeded or Failed" Dec 14 09:10:45.601: INFO: Pod "client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559": Phase="Pending", Reason="", readiness=false. Elapsed: 3.576811ms Dec 14 09:10:47.607: INFO: Pod "client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008673762s STEP: Saw pod success Dec 14 09:10:47.607: INFO: Pod "client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559" satisfied condition "Succeeded or Failed" Dec 14 09:10:47.610: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559 container agnhost-container: STEP: delete the pod Dec 14 09:10:47.625: INFO: Waiting for pod client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559 to disappear Dec 14 09:10:47.628: INFO: Pod client-containers-09467acb-dc70-4ac3-bf6b-ebdbf4fd8559 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:47.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1807" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:45.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:10:46.133: INFO: new 
replicaset for deployment "sample-webhook-deployment" is yet to be created Dec 14 09:10:48.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 14 09:10:50.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63775069846, loc:(*time.Location)(0xa09bc80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:10:53.163: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:53.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-692" for this suite. STEP: Destroying namespace "webhook-692-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":48,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:33.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Dec 14 09:10:33.157: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:35.161: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:37.163: INFO: The status of Pod 
test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:39.163: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:41.163: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:43.162: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:45.164: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:47.162: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:49.163: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:51.163: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:53.164: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = false) Dec 14 09:10:55.165: INFO: The status of Pod test-webserver-f7e9203a-54fb-49bd-ad69-307def40f203 is Running (Ready = true) Dec 14 09:10:55.168: INFO: Container started at 2021-12-14 09:10:34 +0000 UTC, pod became ready at 2021-12-14 09:10:53 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:55.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8055" for this suite. 
• [SLOW TEST:22.065 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":995,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:41.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-5481 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5481 to expose endpoints map[] Dec 14 09:10:41.999: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Dec 14 09:10:43.008: INFO: successfully validated that service multi-endpoint-test in namespace services-5481 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5481 Dec 14 09:10:43.018: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:45.023: INFO: The status of Pod 
pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5481 to expose endpoints map[pod1:[100]] Dec 14 09:10:45.040: INFO: successfully validated that service multi-endpoint-test in namespace services-5481 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-5481 Dec 14 09:10:45.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:47.053: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5481 to expose endpoints map[pod1:[100] pod2:[101]] Dec 14 09:10:47.070: INFO: successfully validated that service multi-endpoint-test in namespace services-5481 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Checking if the Service forwards traffic to pods Dec 14 09:10:47.070: INFO: Creating new exec pod Dec 14 09:10:54.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5481 exec execpodbk84x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Dec 14 09:10:54.398: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Dec 14 09:10:54.398: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:54.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5481 exec execpodbk84x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.141.154.135 80' Dec 14 09:10:54.618: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.141.154.135 80\nConnection to 10.141.154.135 80 port [tcp/http] succeeded!\n" Dec 14 09:10:54.618: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:54.618: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=services-5481 exec execpodbk84x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Dec 14 09:10:54.866: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Dec 14 09:10:54.866: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:54.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5481 exec execpodbk84x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.141.154.135 81' Dec 14 09:10:55.140: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.141.154.135 81\nConnection to 10.141.154.135 81 port [tcp/*] succeeded!\n" Dec 14 09:10:55.140: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Deleting pod pod1 in namespace services-5481 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5481 to expose endpoints map[pod2:[101]] Dec 14 09:10:55.172: INFO: successfully validated that service multi-endpoint-test in namespace services-5481 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-5481 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5481 to expose endpoints map[] Dec 14 09:10:55.186: INFO: successfully validated that service multi-endpoint-test in namespace services-5481 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:55.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5481" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:13.259 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":6,"skipped":189,"failed":0} S ------------------------------ Dec 14 09:10:55.211: INFO: Running AfterSuite actions on all nodes Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:10:55.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:22.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-445 Dec 14 09:10:22.619: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Dec 14 09:10:24.625: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Dec 14 09:10:24.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Dec 14 09:10:24.904: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Dec 14 09:10:24.904: INFO: stdout: "iptables" Dec 14 09:10:24.904: INFO: proxyMode: iptables Dec 14 09:10:24.914: INFO: Waiting for pod kube-proxy-mode-detector to disappear Dec 14 09:10:24.919: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-445 STEP: creating replication controller affinity-nodeport-timeout in namespace services-445 I1214 09:10:24.937182 43 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-445, replica count: 3 I1214 09:10:27.989258 43 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 14 09:10:28.001: INFO: Creating new exec pod Dec 14 09:10:31.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Dec 14 09:10:31.290: INFO: stderr: "+ echo hostName\n+ 
nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Dec 14 09:10:31.291: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:31.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.195.102 80' Dec 14 09:10:31.558: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.195.102 80\nConnection to 10.133.195.102 80 port [tcp/http] succeeded!\n" Dec 14 09:10:31.558: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:31.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.10 32005' Dec 14 09:10:31.827: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.10 32005\nConnection to 172.25.0.10 32005 port [tcp/*] succeeded!\n" Dec 14 09:10:31.827: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:31.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.25.0.9 32005' Dec 14 09:10:32.104: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.25.0.9 32005\nConnection to 172.25.0.9 32005 port [tcp/*] succeeded!\n" Dec 14 09:10:32.104: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Dec 14 09:10:32.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s 
--connect-timeout 2 http://172.25.0.10:32005/ ; done' Dec 14 09:10:32.494: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n" Dec 14 09:10:32.494: INFO: stdout: "\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw\naffinity-nodeport-timeout-fm6dw" Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: 
affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.494: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.495: INFO: Received response from host: affinity-nodeport-timeout-fm6dw Dec 14 09:10:32.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.25.0.10:32005/' Dec 14 09:10:32.776: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n" Dec 14 09:10:32.776: INFO: stdout: "affinity-nodeport-timeout-fm6dw" Dec 14 09:10:52.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-445 exec execpod-affinitymb25b -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.25.0.10:32005/' Dec 14 09:10:53.079: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.25.0.10:32005/\n" Dec 14 09:10:53.079: INFO: stdout: "affinity-nodeport-timeout-zrjlq" Dec 14 09:10:53.079: INFO: Cleaning up the exec pod 
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-445, will wait for the garbage collector to delete the pods Dec 14 09:10:53.149: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.413099ms Dec 14 09:10:53.249: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.253471ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:56.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-445" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • [SLOW TEST:33.900 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":32,"skipped":457,"failed":0} Dec 14 09:10:56.476: INFO: Running AfterSuite actions on all nodes Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:10:56.476: INFO: Running Cleanup 
Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:10:56.476: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:53.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Dec 14 09:10:53.327: INFO: Waiting up to 5m0s for pod "pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e" in namespace "emptydir-667" to be "Succeeded or Failed" Dec 14 09:10:53.330: INFO: Pod "pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409462ms Dec 14 09:10:55.335: INFO: Pod "pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008512618s Dec 14 09:10:57.340: INFO: Pod "pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013115254s STEP: Saw pod success Dec 14 09:10:57.340: INFO: Pod "pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e" satisfied condition "Succeeded or Failed" Dec 14 09:10:57.344: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-vkd62 pod pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e container test-container: STEP: delete the pod Dec 14 09:10:57.360: INFO: Waiting for pod pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e to disappear Dec 14 09:10:57.364: INFO: Pod pod-8bb0e623-921c-42f1-8c64-ad1165bfbe2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:57.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-667" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1007,"failed":0} Dec 14 09:10:57.374: INFO: Running AfterSuite actions on all nodes Dec 14 09:10:57.374: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:10:57.375: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: 
Creating a kubernetes client Dec 14 09:10:55.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-a571df6e-ec1d-472e-8dd7-d966a74f96c2 STEP: Creating a pod to test consume configMaps Dec 14 09:10:55.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62" in namespace "configmap-1452" to be "Succeeded or Failed" Dec 14 09:10:55.252: INFO: Pod "pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243634ms Dec 14 09:10:57.257: INFO: Pod "pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00894713s Dec 14 09:10:59.264: INFO: Pod "pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015665223s STEP: Saw pod success Dec 14 09:10:59.264: INFO: Pod "pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62" satisfied condition "Succeeded or Failed" Dec 14 09:10:59.268: INFO: Trying to get logs from node capi-v1.22-md-0-698f477975-c846h pod pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62 container agnhost-container: STEP: delete the pod Dec 14 09:10:59.283: INFO: Waiting for pod pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62 to disappear Dec 14 09:10:59.286: INFO: Pod pod-configmaps-5116d1e8-6eea-4232-81e2-be6ef9059b62 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:10:59.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1452" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1006,"failed":0} Dec 14 09:10:59.298: INFO: Running AfterSuite actions on all nodes Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:10:59.298: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:47.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication Dec 14 09:10:48.323: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 14 09:10:48.335: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 14 09:10:51.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:11:01.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7043" for this suite. STEP: Destroying namespace "webhook-7043-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.840 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":54,"skipped":1115,"failed":0} Dec 14 09:11:01.555: INFO: Running AfterSuite actions on all nodes Dec 14 09:11:01.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:11:01.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:11:01.556: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:11:01.556: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:11:01.556: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:11:01.556: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:11:01.556: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] CronJob 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:05:51.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Dec 14 09:11:01.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-4500" for this suite. 
• [SLOW TEST:310.065 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":24,"skipped":383,"failed":0} Dec 14 09:11:01.720: INFO: Running AfterSuite actions on all nodes Dec 14 09:11:01.720: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Dec 14 09:11:01.721: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 14 09:10:36.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-4959 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-4959 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4959 Dec 14 09:10:36.546: INFO: Found 0 stateful pods, waiting for 1 Dec 14 09:10:46.550: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 14 09:10:46.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Dec 14 09:10:46.792: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Dec 14 09:10:46.792: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Dec 14 09:10:46.793: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Dec 14 09:10:46.797: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 14 09:10:56.802: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 14 09:10:56.803: INFO: Waiting for statefulset status.replicas updated to 0 Dec 14 09:10:56.819: INFO: POD NODE PHASE GRACE CONDITIONS Dec 14 09:10:56.819: INFO: ss-0 capi-v1.22-md-0-698f477975-vkd62 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC }]
Dec 14 09:10:56.819: INFO:
Dec 14 09:10:56.819: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 14 09:10:57.824: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996761998s
Dec 14 09:10:58.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991904573s
Dec 14 09:10:59.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987074123s
Dec 14 09:11:00.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981649978s
Dec 14 09:11:01.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976138201s
Dec 14 09:11:02.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971334005s
Dec 14 09:11:03.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.965867685s
Dec 14 09:11:04.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.960003836s
Dec 14 09:11:05.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.414301ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4959
Dec 14 09:11:06.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 14 09:11:07.122: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 14 09:11:07.122: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 14 09:11:07.122: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 14 09:11:07.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 14 09:11:07.378: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 14 09:11:07.379: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 14 09:11:07.379: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 14 09:11:07.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 14 09:11:07.645: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 14 09:11:07.645: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 14 09:11:07.645: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Dec 14 09:11:07.650: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 14 09:11:07.650: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 14 09:11:07.650: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 14 09:11:07.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 14 09:11:07.942: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 14 09:11:07.942: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 14 09:11:07.942: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 14 09:11:07.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 14 09:11:08.188: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 14 09:11:08.189: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 14 09:11:08.189: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 14 09:11:08.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-4959 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 14 09:11:08.450: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 14 09:11:08.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 14 09:11:08.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 14 09:11:08.450: INFO: Waiting for statefulset status.replicas updated to 0
Dec 14 09:11:08.455: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Dec 14 09:11:18.466: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 14 09:11:18.466: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 14 09:11:18.466: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 14 09:11:18.482: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 14 09:11:18.482: INFO: ss-0 capi-v1.22-md-0-698f477975-vkd62 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC }]
Dec 14 09:11:18.482: INFO: ss-1 capi-v1.22-md-0-698f477975-c846h Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC }]
Dec 14 09:11:18.482: INFO: ss-2 capi-v1.22-md-0-698f477975-vkd62 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC }]
Dec 14 09:11:18.482: INFO:
Dec 14 09:11:18.482: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 14 09:11:19.487: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 14 09:11:19.487: INFO: ss-0 capi-v1.22-md-0-698f477975-vkd62 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:36 +0000 UTC }]
Dec 14 09:11:19.487: INFO: ss-2 capi-v1.22-md-0-698f477975-vkd62 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:11:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-14 09:10:56 +0000 UTC }]
Dec 14 09:11:19.487: INFO:
Dec 14 09:11:19.487: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 14 09:11:20.493: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.989318366s
Dec 14 09:11:21.499: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.983452905s
Dec 14 09:11:22.505: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.977904031s
Dec 14 09:11:23.512: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.971159206s
Dec 14 09:11:24.518: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.964654772s
Dec 14 09:11:25.523: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.95869132s
Dec 14 09:11:26.534: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.953938177s
Dec 14 09:11:27.539: INFO: Verifying statefulset ss doesn't scale past 0 for another 942.675915ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4959
Dec 14 09:11:28.544: INFO: Scaling statefulset ss to 0
Dec 14 09:11:28.556: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Dec 14 09:11:28.559: INFO: Deleting all statefulset in ns statefulset-4959
Dec 14 09:11:28.563: INFO: Scaling statefulset ss to 0
Dec 14 09:11:28.574: INFO: Waiting for statefulset status.replicas updated to 0
Dec 14 09:11:28.577: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:11:28.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4959" for this suite.
• [SLOW TEST:52.098 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":39,"skipped":913,"failed":0}
Dec 14 09:11:28.603: INFO: Running AfterSuite actions on all nodes
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Dec 14 09:11:28.603: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:10:33.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace statefulset-2616
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Dec 14 09:10:33.218: INFO: Found 0 stateful pods, waiting for 3
Dec 14 09:10:43.225: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 14 09:10:43.225: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 14 09:10:43.225: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 14 09:10:43.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2616 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 14 09:10:43.452: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 14 09:10:43.452: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 14 09:10:43.452: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Dec 14 09:10:53.490: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 14 09:11:03.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2616 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 14 09:11:03.906: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 14 09:11:03.907: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 14 09:11:03.907: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
STEP: Rolling back to a previous revision
Dec 14 09:11:13.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2616 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 14 09:11:14.255: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 14 09:11:14.255: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 14 09:11:14.255: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Dec 14 09:11:24.296: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 14 09:11:34.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2616 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 14 09:11:34.606: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 14 09:11:34.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 14 09:11:34.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Dec 14 09:11:44.635: INFO: Deleting all statefulset in ns statefulset-2616
Dec 14 09:11:44.639: INFO: Scaling statefulset ss2 to 0
Dec 14 09:11:54.667: INFO: Waiting for statefulset status.replicas updated to 0
Dec 14 09:11:54.671: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:11:54.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2616" for this suite.
• [SLOW TEST:81.523 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":19,"skipped":349,"failed":0}
Dec 14 09:11:54.701: INFO: Running AfterSuite actions on all nodes
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Dec 14 09:11:54.701: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:08:03.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-3547cd4e-098c-478b-871c-ac758758f94f in namespace container-probe-9307
Dec 14 09:08:05.389: INFO: Started pod liveness-3547cd4e-098c-478b-871c-ac758758f94f in namespace container-probe-9307
STEP: checking the pod's current state and verifying that restartCount is present
Dec 14 09:08:05.392: INFO: Initial restart count of pod liveness-3547cd4e-098c-478b-871c-ac758758f94f is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:12:06.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9307" for this suite.
• [SLOW TEST:243.221 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":323,"failed":0}
Dec 14 09:12:06.570: INFO: Running AfterSuite actions on all nodes
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Dec 14 09:12:06.570: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 14 09:09:51.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-61faddfb-300d-46bc-a1d3-c2947a0a94aa in namespace container-probe-1647
Dec 14 09:09:55.289: INFO: Started pod busybox-61faddfb-300d-46bc-a1d3-c2947a0a94aa in namespace container-probe-1647
STEP: checking the pod's current state and verifying that restartCount is present
Dec 14 09:09:55.294: INFO: Initial restart count of pod busybox-61faddfb-300d-46bc-a1d3-c2947a0a94aa is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Dec 14 09:13:56.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1647" for this suite.
• [SLOW TEST:244.774 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":364,"failed":0}
Dec 14 09:13:56.018: INFO: Running AfterSuite actions on all nodes
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Dec 14 09:13:56.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Dec 14 09:13:56.019: INFO: Running AfterSuite actions on node 1
Dec 14 09:13:56.019: INFO: Skipping dumping logs from cluster

Ran 325 of 6432 Specs in 671.854 seconds
SUCCESS! -- 325 Passed | 0 Failed | 0 Pending | 6107 Skipped

Ginkgo ran 1 suite in 11m14.811628581s
Test Suite Passed
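A note on the pattern that recurs throughout the StatefulSet specs above: the suite toggles pod readiness by moving the httpd index file in and out of the web root with `kubectl exec … /bin/sh -x -c 'mv -v SRC DST || true'`. The `|| true` forces a zero exit status even when the file has already been moved, which is why ss-1 and ss-2 log `mv: can't rename '/tmp/index.html': No such file or directory` without failing the spec. A minimal local sketch of that shell pattern (no cluster needed; the temp directories below are stand-ins for `/tmp` and `/usr/local/apache2/htdocs` inside the pods):

```shell
#!/bin/sh
# Local stand-ins for the pod paths used in the e2e log.
set -u
htdocs=$(mktemp -d)   # stands in for /usr/local/apache2/htdocs
tmpdir=$(mktemp -d)   # stands in for /tmp
echo 'hello' > "$tmpdir/index.html"

# First move: the file exists, mv succeeds, exit status is 0.
sh -x -c "mv -v $tmpdir/index.html $htdocs/ || true"
echo "first exit: $?"

# Second move: the file is already gone, mv fails on stderr,
# but `|| true` still yields exit status 0 -- the framework
# treats both outcomes as success.
sh -x -c "mv -v $tmpdir/index.html $htdocs/ || true"
echo "second exit: $?"
```

The same idempotent "move if present, succeed either way" command can be issued against every replica without tracking which pods already had their index file moved.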