Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1623686857 - Will randomize all specs
Will run 5668 specs

Running in parallel across 10 nodes

Jun 14 16:07:40.011: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.015: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 14 16:07:40.331: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 14 16:07:40.386: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 14 16:07:40.386: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 14 16:07:40.386: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 14 16:07:40.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Jun 14 16:07:40.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 14 16:07:40.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
Jun 14 16:07:40.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 14 16:07:40.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Jun 14 16:07:40.397: INFO: e2e test version: v1.20.7
Jun 14 16:07:40.398: INFO: kube-apiserver version: v1.20.7
Jun 14 16:07:40.399: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.405: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 14 16:07:40.400: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.425: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 14 16:07:40.401: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.429: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 14 16:07:40.442: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.726: INFO: Cluster IP family: ipv4
Jun 14 16:07:40.432: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.726: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 14 16:07:40.439: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.726: INFO: Cluster IP family: ipv4
Jun 14 16:07:40.408: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.726: INFO: Cluster IP family: ipv4
Jun 14 16:07:40.432: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.726: INFO: Cluster IP family: ipv4
Jun 14 16:07:40.404: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.727: INFO: Cluster IP family: ipv4
Jun 14 16:07:40.407: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 16:07:40.727: INFO: Cluster IP family: ipv4
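Editor's note: the suite header above shows the run gating on every kube-system pod and daemonset being ready before any spec starts. A minimal client-go sketch of that kind of readiness gate is below; it is illustrative only (the real checks live in the e2e framework), and the kubeconfig path, poll interval, and timeout are assumptions borrowed from the log.

```go
// Illustrative readiness gate: poll until every pod in kube-system is Ready.
// Not the e2e framework's actual implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until every pod in kube-system reports Ready, or time out after 10m.
	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and retry
		}
		ready := 0
		for i := range pods.Items {
			if podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("%d / %d pods in 'kube-system' are ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```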
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Jun 14 16:07:41.050: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.064: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:07:41.067: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:41.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2864" for this suite. 
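Editor's note: every autoscaler scalability spec in this run is skipped with "Only supported for providers [gce gke kubemark] (not skeleton)". That message comes from a provider gate in the spec's BeforeEach. The sketch below shows the general shape of such a gate; the package paths and helper names follow the upstream e2e framework as of roughly v1.20 but may differ between releases, and the spec body is omitted.

```go
// Sketch of a provider-gated Ginkgo spec, loosely modelled on the upstream
// e2e framework; exact package paths and helpers vary between releases.
package autoscaling

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("Cluster size autoscaler scalability [Slow]", func() {
	f := framework.NewDefaultFramework("autoscaling")

	ginkgo.BeforeEach(func() {
		// Everything below needs a managed node-group API; on the "skeleton"
		// provider (for example a local kind cluster, as in this run) the
		// spec is skipped before it touches the cluster.
		e2eskipper.SkipUnlessProviderIs("gce", "gke", "kubemark")
	})

	ginkgo.It("should scale up at all [Feature:ClusterAutoscalerScalability1]", func() {
		_ = f // real test body omitted
	})
})
```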
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:07:41.077212 22 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 242 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc00016e0d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003d60760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004364160, 0xc003d60760, 0xc004364160, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003d60760, 0xa861a03ca6ca6, 0xc003d60788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0x99, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc003026f60, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000feeb40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000feeb40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0013ab018, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc003d616c8, 0xc001adc690, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001adc690, 0x0, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001adc690, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0012ea780, 0xc001adc690, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0012ea780, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0012ea780, 0xc002601af8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000178280, 0x7f2eea772b90, 0xc0015da900, 0x4df1e67, 0x14, 0xc0036a9f80, 0x3, 0x3, 0x55a4b80, 0xc00016c900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc0015da900, 0x4df1e67, 0x14, 0xc003fd0d00, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc0015da900, 0x4df1e67, 0x14, 0xc0024ef760, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0015da900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0015da900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0015da900, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.666 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:238 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ S ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test Jun 14 16:07:41.059: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.064: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:88 [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-980" for this suite. 
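Editor's note: the NodeLease spec above ("should have OwnerReferences set") checks that each node's Lease object in the kube-node-lease namespace is owned by the corresponding Node. The standalone client-go sketch below performs the same check; it is illustrative only (the real assertion is in test/e2e/common/node_lease.go), and the kubeconfig path is an assumption taken from the log.

```go
// Check that each node's Lease in kube-node-lease is owned by its Node.
// Illustrative sketch, not the test's actual code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// The kubelet keeps one Lease per node, named after the node.
		lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), node.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if len(lease.OwnerReferences) == 0 {
			fmt.Printf("lease %q has no OwnerReferences\n", lease.Name)
			continue
		}
		ref := lease.OwnerReferences[0]
		fmt.Printf("lease %q owned by %s/%s (uid match: %v)\n",
			lease.Name, ref.Kind, ref.Name, ref.UID == node.UID)
	}
}
```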
•SS ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename localssd Jun 14 16:07:41.072: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.075: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:36 Jun 14 16:07:41.078: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:41.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "localssd-1472" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.215 seconds] [k8s.io] GKE local SSD [Feature:GKELocalSSD] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should write and read from node local SSD [Feature:GKELocalSSD] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:40 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:37 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling Jun 14 16:07:41.078: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.082: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:07:41.095: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:41.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-7883" for this suite. 
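Editor's note: several specs above log "Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled" followed by "No PSP annotation exists on dry run pod". The idea is to dry-run a pod create and look for the kubernetes.io/psp annotation that the PSP admission controller would add. The sketch below illustrates that probe; the pod name, image, and namespace are assumptions, and the framework's real probe differs in detail.

```go
// Probe whether the PodSecurityPolicy admission plugin is active by
// dry-running a pod create and checking for the "kubernetes.io/psp"
// annotation. Sketch only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	dryRunPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "psp-probe-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "probe", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	// DryRun: the API server runs admission but persists nothing.
	created, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), dryRunPod,
		metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		panic(err)
	}
	if _, ok := created.Annotations["kubernetes.io/psp"]; ok {
		fmt.Println("PodSecurityPolicy appears to be enabled")
	} else {
		fmt.Println("No PSP annotation on dry-run pod; assuming PodSecurityPolicy is disabled")
	}
}
```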
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:07:41.107804 24 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 193 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00205a760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00452e0e0, 0xc00205a760, 0xc00452e0e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00205a760, 0xa861a059d64fe, 0xc00205a788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0x3a, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc00455e180, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0001aa8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0001aa8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0010e0fc0, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00205b6c8, 0xc002b463c0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002b463c0, 0x0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002b463c0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc004211cc0, 0xc002b463c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc004211cc0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc004211cc0, 0xc00406beb8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7faeac0c7fc0, 0xc002152c00, 0x4df1e67, 0x14, 0xc0019a7c80, 0x3, 0x3, 0x55a4b80, 0xc0001de8c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc002152c00, 0x4df1e67, 0x14, 0xc003821400, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc002152c00, 0x4df1e67, 0x14, 0xc0038b2260, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002152c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002152c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002152c00, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.056 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale up at all [Feature:ClusterAutoscalerScalability1] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:138 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:07:41.926: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:41.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9830" for this suite. 
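Editor's note: the "Restoring initial size of the cluster" teardown in these autoscaler specs keeps ending in the same panic. The traces show WaitForReadyNodes being handed a nil clientset (the first argument is 0x0), apparently because the provider check skips the spec before the BeforeEach captures the client, while the custom AfterEach at cluster_autoscaler_scalability.go:115 still runs. The sketch below shows the defensive shape such a teardown could take; the helper names mirror the trace, but the guard and the bookkeeping map are illustrative, not the upstream code or fix.

```go
// Illustrative teardown guard against the nil-pointer dereference seen in the
// stack traces above. Sketch only, not the upstream code.
package autoscaling

import (
	"time"

	"github.com/onsi/ginkgo"

	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/test/e2e/framework"
	e2enode "k8s.io/kubernetes/test/e2e/framework/node"
)

var _ = ginkgo.Describe("Cluster size autoscaler scalability [Slow]", func() {
	f := framework.NewDefaultFramework("autoscaling")

	var c clientset.Interface
	var originalSizes map[string]int // hypothetical per-node-group bookkeeping

	ginkgo.BeforeEach(func() {
		// If the spec is skipped above this point, c stays nil.
		c = f.ClientSet
		originalSizes = map[string]int{}
	})

	ginkgo.AfterEach(func() {
		ginkgo.By("Restoring initial size of the cluster")
		if c == nil || len(originalSizes) == 0 {
			// Nothing was set up, so there is nothing to restore.
			return
		}
		nodeCount := 0
		for _, n := range originalSizes {
			nodeCount += n
		}
		framework.ExpectNoError(e2enode.WaitForReadyNodes(c, nodeCount, 20*time.Minute))
	})
})
```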
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:07:43.229539 24 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 193 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc0001aa078) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00205a760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0046077c0, 0xc00205a760, 0xc0046077c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00205a760, 0xa861a84131c34, 0xc00205a788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0xb8, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc004661830, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0001aa8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0001aa8a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc0010e0fc0, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc00205b6c8, 0xc002b464b0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002b464b0, 0x0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002b464b0, 0x54ea640, 0xc0001de8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc004211cc0, 0xc002b464b0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc004211cc0, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc004211cc0, 0xc00406beb8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001cc230, 0x7faeac0c7fc0, 0xc002152c00, 0x4df1e67, 0x14, 0xc0019a7c80, 0x3, 0x3, 0x55a4b80, 0xc0001de8c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc002152c00, 0x4df1e67, 0x14, 0xc003821400, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc002152c00, 0x4df1e67, 0x14, 0xc0038b2260, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002152c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002152c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002152c00, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [2.096 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale up twice [Feature:ClusterAutoscalerScalability2] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:161 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime Jun 14 16:07:41.070: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.073: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 14 16:07:55.337: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:55.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7857" for this suite. 
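Editor's note: the Container Runtime steps just above verify that a container's termination message ("DONE") is surfaced once terminationMessagePath is set. The client-go sketch below reproduces that flow end to end; the pod name, image, namespace, and timeouts are assumptions, not the test's own fixtures.

```go
// Create a pod whose container writes "DONE" to its termination message path,
// then read the message back from the container status. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-log"},
				// /dev/termination-log is already the default; set explicitly for clarity.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Wait for the container to terminate, then print its termination message.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods("default").Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil || len(p.Status.ContainerStatuses) == 0 {
			return false, nil
		}
		term := p.Status.ContainerStatuses[0].State.Terminated
		if term == nil {
			return false, nil
		}
		fmt.Printf("termination message: %q\n", term.Message)
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}
```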
• [SLOW TEST:14.559 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:171 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":1,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:55.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:07:55.836: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:55.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-2692" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:07:55.847374 21 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 277 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc00016e0d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003432760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000f029a0, 0xc003432760, 0xc000f029a0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003432760, 0xa861d7428ac60, 0xc003432788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0x67, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002e16b10, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000633f08, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0034336c8, 0xc003274960, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003274960, 0x0, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003274960, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0029d2000, 0xc003274960, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0029d2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0029d2000, 0xc0006a4480) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000178280, 0x7f453db2a268, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041c2a50, 0x3, 0x3, 0x55a4b80, 0xc00016c900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041fb800, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041e85e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0040bf080, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.406 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should scale down empty nodes [Feature:ClusterAutoscalerScalability3] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:210 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:55.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:07:55.925: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-8618" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:07:55.936235 21 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 277 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc00016e0d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003432760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc004052580, 0xc003432760, 0xc004052580, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003432760, 0xa861d7975415b, 0xc003432788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0x78, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc002e17bc0, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000633f08, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0034336c8, 0xc003274c30, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003274c30, 0x0, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003274c30, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0029d2000, 0xc003274c30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0029d2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0029d2000, 0xc0006a4480) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000178280, 0x7f453db2a268, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041c2a50, 0x3, 0x3, 0x55a4b80, 0xc00016c900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041fb800, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041e85e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0040bf080, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:335 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 Jun 14 16:07:42.930: INFO: Waiting up to 5m0s for pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c" in namespace "security-context-test-7764" to be "Succeeded or Failed" Jun 14 16:07:43.229: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 299.158141ms Jun 14 16:07:45.235: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305174688s Jun 14 16:07:47.427: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497810235s Jun 14 16:07:49.525: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.595799258s Jun 14 16:07:51.733: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.803725053s Jun 14 16:07:53.933: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Running", Reason="", readiness=true. Elapsed: 11.002930506s Jun 14 16:07:55.937: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.007460758s Jun 14 16:07:55.937: INFO: Pod "busybox-user-0-1d83b698-3972-4132-a520-4706623b8f6c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:55.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7764" for this suite. • [SLOW TEST:14.646 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test Jun 14 16:07:41.073: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.076: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 Jun 14 16:07:41.096: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4814" to be "Succeeded or Failed" Jun 14 16:07:41.099: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 3.614604ms Jun 14 16:07:43.229: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133807444s Jun 14 16:07:45.235: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139097265s Jun 14 16:07:47.428: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.331943468s Jun 14 16:07:49.526: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.42999523s Jun 14 16:07:51.733: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.637230432s Jun 14 16:07:53.737: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.641106002s Jun 14 16:07:55.831: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.735647407s Jun 14 16:07:55.831: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:56.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4814" for this suite. • [SLOW TEST:15.401 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":9,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Jun 14 16:07:41.228: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 STEP: creating secret and pod Jun 14 16:07:41.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4012 create -f -' Jun 14 16:07:49.842: INFO: stderr: "" Jun 14 16:07:49.842: INFO: stdout: "secret/test-secret created\n" Jun 14 16:07:49.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4012 create -f -' Jun 14 16:07:50.840: INFO: stderr: "" Jun 14 16:07:50.840: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Jun 14 16:07:56.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4012 logs secret-test-pod test-container' Jun 14 16:07:57.072: INFO: stderr: "" Jun 14 16:07:57.072: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:57.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4012" for this suite. 
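For reference, the example exercised above creates a Secret named test-secret and a pod secret-test-pod whose test-container mounts it and reads /etc/secret-volume/data-1. Below is a rough Go sketch of equivalent objects built with the k8s.io/api types this suite vendors; the names, mount path and key come from the log, while the image and command are assumptions rather than the framework's actual manifest.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Secret the pod reads; the key and value mirror the logged output for data-1.
	secret := corev1.Secret{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption; the real example uses a suite-provided image
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	// Printing the objects as JSON makes them usable with the kind of
	// "kubectl create -f -" invocation shown in the log.
	for _, obj := range []interface{}{secret, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}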
• [SLOW TEST:15.974 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a pod that reads a secret _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:114 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":70,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl Jun 14 16:07:41.059: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.064: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:57.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9586" for this suite. 
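The sysctl spec above only records STEP lines, so as a rough illustration under stated assumptions: the pod it creates sets kernel.shm_rmid_forced (the sysctl named in the log) through the pod-level security context and then reads the value back. The pod name, image, command and value below are assumptions; whether the sysctl is treated as safe or as an explicitly allowed unsafe one depends on the kubelet's configuration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Pod-scoped sysctl; the kubelet rejects the pod if the name is
				// neither in the safe set nor in --allowed-unsafe-sysctls.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "sysctl kernel.shm_rmid_forced"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}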
• [SLOW TEST:16.261 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:108 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":1,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:43.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:07:58.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-541" for this suite. 
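The "pull from private registry with secret" spec above creates an image pull Secret and references it from the pod via imagePullSecrets. A minimal sketch of that wiring follows; the registry, credentials and image are placeholders, not values taken from the log.

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// .dockerconfigjson payload; registry and credentials are placeholders.
	auth := base64.StdEncoding.EncodeToString([]byte("user:password"))
	dockerCfg := fmt.Sprintf(`{"auths":{"registry.example.com":{"auth":%q}}}`, auth)

	pullSecret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-secret"},
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{corev1.DockerConfigJsonKey: dockerCfg},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"},
		Spec: corev1.PodSpec{
			RestartPolicy:    corev1.RestartPolicyNever,
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: pullSecret.Name}},
			Containers: []corev1.Container{{
				Name:            "image-pull-test",
				Image:           "registry.example.com/private/image:latest", // placeholder
				ImagePullPolicy: corev1.PullAlways,
			}},
		},
	}
	for _, obj := range []interface{}{pullSecret, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}

The companion "without secret" spec later in this log is essentially the same pod minus imagePullSecrets, where the pull is expected to fail.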
• [SLOW TEST:14.870 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:56.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:00.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5815" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":271,"failed":0} SSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:57.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Jun 14 16:07:57.674: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 STEP: creating the pod Jun 14 16:07:57.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4764 create -f -' Jun 14 16:07:58.064: INFO: stderr: "" Jun 14 16:07:58.064: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Jun 14 16:08:02.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4764 logs dapi-test-pod test-container' Jun 14 16:08:03.137: INFO: stderr: "" Jun 14 16:08:03.137: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-4764\nMY_POD_IP=10.244.1.97\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.7\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Jun 14 16:08:03.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-4764 logs dapi-test-pod test-container' Jun 14 16:08:03.364: INFO: stderr: "" Jun 14 16:08:03.364: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-4764\nMY_POD_IP=10.244.1.97\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.7\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:03.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4764" for this suite. 
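The environment dump captured above (MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, MY_HOST_IP) comes from downward-API environment variables on dapi-test-pod. A sketch of a pod wired that way follows; the variable names and field paths match what the log shows, while the image and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv maps a downward-API field path to an environment variable.
func fieldEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dapi-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("MY_POD_NAME", "metadata.name"),
					fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("MY_POD_IP", "status.podIP"),
					fieldEnv("MY_HOST_IP", "status.hostIP"),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}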
• [SLOW TEST:5.756 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a pod that prints his name and namespace _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:134 ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace","total":-1,"completed":2,"skipped":381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:57.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 Jun 14 16:07:57.459: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96" in namespace "security-context-test-6728" to be "Succeeded or Failed" Jun 14 16:07:57.462: INFO: Pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875021ms Jun 14 16:07:59.466: INFO: Pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006362517s Jun 14 16:08:01.533: INFO: Pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074020971s Jun 14 16:08:03.831: INFO: Pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371460067s Jun 14 16:08:03.831: INFO: Pod "alpine-nnp-true-51ea8bd8-90c1-4f99-8808-7a4619f29a96" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:04.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6728" for this suite. 
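The alpine-nnp-true pod above exercises AllowPrivilegeEscalation set to true on a container that runs as a non-root user. A sketch of that security context follows; only the field settings follow from the test name, while the uid, image and command are assumptions (the suite uses a purpose-built image that checks whether escalation actually happened).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	allowEscalation := true
	nonRootUID := int64(1000) // assumption: the container runs as some non-root uid
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-true",
				Image:   "alpine",              // assumption
				Command: []string{"id", "-u"}, // stand-in for the suite's escalation check
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &nonRootUID,
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}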
• [SLOW TEST:7.749 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:04.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:10.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-607" for this suite. 
• [SLOW TEST:6.597 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":3,"skipped":781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:10.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:134 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:13.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9312" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:05.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:26.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8720" for this suite. • [SLOW TEST:21.561 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:778 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":4,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:27.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330 Jun 14 16:08:27.255: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-5d14051c-3af3-42e2-8b07-549eeff847a3" in namespace "security-context-test-5096" to be "Succeeded or Failed" Jun 14 16:08:27.259: INFO: Pod "alpine-nnp-nil-5d14051c-3af3-42e2-8b07-549eeff847a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30499ms Jun 14 16:08:29.330: INFO: Pod "alpine-nnp-nil-5d14051c-3af3-42e2-8b07-549eeff847a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.074932513s Jun 14 16:08:31.334: INFO: Pod "alpine-nnp-nil-5d14051c-3af3-42e2-8b07-549eeff847a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078475345s Jun 14 16:08:31.334: INFO: Pod "alpine-nnp-nil-5d14051c-3af3-42e2-8b07-549eeff847a3" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:31.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5096" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:31.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 STEP: Creating a pod with a privileged container STEP: Executing in the privileged container Jun 14 16:08:42.729: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8231 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 16:08:42.729: INFO: >>> kubeConfig: /root/.kube/config Jun 14 16:08:42.881: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-8231 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 16:08:42.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Jun 14 16:08:43.065: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-8231 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 14 16:08:43.065: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:43.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-8231" for this suite. 
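The privileged-pod spec above runs one privileged and one non-privileged container and execs "ip link add dummy1 type dummy" in each, expecting the command to succeed only in the privileged one (those are the ExecWithOptions lines in the log). A sketch of the two-container pod follows; the container names and the exec'd command come from the log, the image and sleep command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// container returns a long-running container with Privileged set as given.
func container(name string, privileged bool) corev1.Container {
	return corev1.Container{
		Name:            name,
		Image:           "busybox", // assumption; the suite image also ships the ip tool
		Command:         []string{"sleep", "3600"},
		SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				container("privileged-container", true),
				container("not-privileged-container", false),
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}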
• [SLOW TEST:11.701 seconds] [k8s.io] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:49 ------------------------------ {"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":6,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe Jun 14 16:07:41.112: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.115: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216 STEP: Creating pod busybox-0a3ecadd-8c73-4791-b6cd-1fa94bcac732 in namespace container-probe-5603 Jun 14 16:07:55.136: INFO: Started pod busybox-0a3ecadd-8c73-4791-b6cd-1fa94bcac732 in namespace container-probe-5603 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:07:55.225: INFO: Initial restart count of pod busybox-0a3ecadd-8c73-4791-b6cd-1fa94bcac732 is 0 Jun 14 16:08:50.727: INFO: Restart count of pod container-probe-5603/busybox-0a3ecadd-8c73-4791-b6cd-1fa94bcac732 is now 1 (55.50229719s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:51.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5603" for this suite. 
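The probe spec above waits for a container whose exec liveness probe exceeds its timeout and then verifies that the restart count moves from 0 to 1 (about 55s elapsed in this run). A sketch of a liveness probe configured that way follows; the probe command, delays and thresholds are assumptions chosen only to show the shape, not the suite's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // assumption
				Command: []string{"sleep", "600"},
				LivenessProbe: &corev1.Probe{
					// The probe command sleeps past TimeoutSeconds, so each probe times out
					// once exec probe timeouts are enforced and the kubelet restarts the container.
					// In the v1.20 API the embedded probe field is named Handler; newer
					// releases call it ProbeHandler.
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 10"}}},
					InitialDelaySeconds: 30,
					TimeoutSeconds:      1,
					FailureThreshold:    1,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}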
• [SLOW TEST:70.150 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a docker exec liveness probe with timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:216 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout ","total":-1,"completed":1,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:51.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:08:55.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7397" for this suite. 
• [SLOW TEST:5.335 seconds] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:68 ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":2,"skipped":264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:56.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277 Jun 14 16:08:58.528: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-8e4d2def-d716-47f2-a361-eb35d6a2ddb0" in namespace "security-context-test-5088" to be "Succeeded or Failed" Jun 14 16:08:59.326: INFO: Pod "busybox-privileged-true-8e4d2def-d716-47f2-a361-eb35d6a2ddb0": Phase="Pending", Reason="", readiness=false. Elapsed: 706.560459ms Jun 14 16:09:01.330: INFO: Pod "busybox-privileged-true-8e4d2def-d716-47f2-a361-eb35d6a2ddb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.710581782s Jun 14 16:09:01.330: INFO: Pod "busybox-privileged-true-8e4d2def-d716-47f2-a361-eb35d6a2ddb0" satisfied condition "Succeeded or Failed" Jun 14 16:09:01.337: INFO: Got logs for pod "busybox-privileged-true-8e4d2def-d716-47f2-a361-eb35d6a2ddb0": "" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:01.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5088" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":3,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:56.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:250 STEP: Creating pod busybox-371ac873-fafb-44a2-ba0f-2ea5a6d76d69 in namespace container-probe-452 Jun 14 16:08:00.433: INFO: Started pod busybox-371ac873-fafb-44a2-ba0f-2ea5a6d76d69 in namespace container-probe-452 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:08:00.437: INFO: Initial restart count of pod busybox-371ac873-fafb-44a2-ba0f-2ea5a6d76d69 is 0 Jun 14 16:09:01.828: INFO: Restart count of pod container-probe-452/busybox-371ac873-fafb-44a2-ba0f-2ea5a6d76d69 is now 1 (1m1.390782987s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:01.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-452" for this suite. 
• [SLOW TEST:65.659 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:250 ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:56.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:367 STEP: Creating pod startup-142ce376-4d56-46df-bc6a-0c92bb8d8aa4 in namespace container-probe-675 Jun 14 16:07:58.210: INFO: Started pod startup-142ce376-4d56-46df-bc6a-0c92bb8d8aa4 in namespace container-probe-675 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:07:58.213: INFO: Initial restart count of pod startup-142ce376-4d56-46df-bc6a-0c92bb8d8aa4 is 0 Jun 14 16:09:01.828: INFO: Restart count of pod container-probe-675/startup-142ce376-4d56-46df-bc6a-0c92bb8d8aa4 is now 1 (1m3.615147419s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:01.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-675" for this suite. 
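The startup-probe spec above combines a startup probe that must succeed first with a liveness probe that then fails, so the restart only occurs after the startup probe has "enabled" liveness checking (here after roughly a minute). A sketch of that combination follows; the marker file, delays and image are assumptions, only the startup-then-liveness structure is taken from the test.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-then-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox", // assumption
				// Create a marker file after a while, then keep running.
				Command: []string{"/bin/sh", "-c", "sleep 30; touch /tmp/startup; sleep 600"},
				StartupProbe: &corev1.Probe{
					// Liveness checking is held off until this probe succeeds.
					Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}}},
					FailureThreshold: 60,
					PeriodSeconds:    5,
				},
				LivenessProbe: &corev1.Probe{
					// Always fails, so the container is restarted once startup has completed.
					Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/no/such/file"}}},
					FailureThreshold: 1,
					PeriodSeconds:    5,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}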
• [SLOW TEST:65.685 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:367 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":265,"failed":0} SS ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":2,"skipped":19,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:01.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename autoscaling STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:71 Jun 14 16:09:01.970: INFO: Only supported for providers [gce gke kubemark] (not skeleton) [AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:01.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "autoscaling-9685" for this suite. 
[AfterEach] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:115 STEP: Restoring initial size of the cluster E0614 16:09:01.979959 21 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 277 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x449aa40, 0x77a5860) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x449aa40, 0x77a5860) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes.func1(0x2c, 0x2b, 0xc00016e0d8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:179 +0x52 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003432760, 0xcaf500, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00456b4c0, 0xc003432760, 0xc00456b4c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc003432760, 0xa862cd9f83e5d, 0xc003432788) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/node.waitListSchedulableNodes(0x0, 0x0, 0x7977f00, 0x75, 0x4f9357) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:178 +0xa5 k8s.io/kubernetes/test/e2e/framework/node.CheckReady(0x0, 0x0, 0x0, 0x1176592e000, 0x0, 0xc0041f5da0, 0x25, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:150 +0xb9 k8s.io/kubernetes/test/e2e/framework/node.WaitForReadyNodes(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/node/wait.go:44 k8s.io/kubernetes/test/e2e/autoscaling.glob..func2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:118 +0x105 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001a30300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run(0xc000633f08, 0x54ea640, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:15 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample.func1(0xc0034336c8, 0xc003274b40, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:180 +0x3cd k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc003274b40, 0x0, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:197 +0x3a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc003274b40, 0x54ea640, 0xc00016c900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0029d2000, 0xc003274b40, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0029d2000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0029d2000, 0xc0006a4480) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000178280, 0x7f453db2a268, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041c2a50, 0x3, 0x3, 0x55a4b80, 0xc00016c900, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041fb800, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x54ef280, 0xc0040bf080, 0x4df1e67, 0x14, 0xc0041e85e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0040bf080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0040bf080, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [k8s.io] Cluster size autoscaler scalability [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:297 Only supported for providers [gce gke kubemark] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling/cluster_autoscaler_scalability.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:02.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-pools STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] GKE node pools [Feature:GKENodePool] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:34 Jun 14 16:09:02.096: INFO: Only supported for providers [gke] (not skeleton) [AfterEach] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-pools-7393" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [k8s.io] GKE node pools [Feature:GKENodePool] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should create a cluster with multiple node pools [Feature:GKENodePool] [BeforeEach] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:38 Only supported for providers [gke] (not skeleton) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/gke_node_pools.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:01.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should not be ready until startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:396 Jun 14 16:08:03.342: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Pending, waiting for it to be Running (with Ready = true) Jun 14 16:08:05.645: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Pending, waiting for it to be Running (with Ready = true) Jun 14 16:08:07.629: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Pending, waiting for it to be Running (with Ready = true) Jun 14 16:08:09.625: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:11.522: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:13.421: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:15.527: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:17.831: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:19.345: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:21.923: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:23.347: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:25.833: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:27.823: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:29.346: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:31.436: INFO: 
The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:34.226: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:35.823: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:37.347: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:39.925: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:41.523: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:43.923: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:45.922: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:47.427: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:49.423: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:51.346: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:53.531: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:55.347: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:57.622: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:08:59.346: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:09:01.346: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = false) Jun 14 16:09:03.346: INFO: The status of Pod startup-78e9f3fb-d2c3-413d-95df-324e8289de75 is Running (Ready = true) Jun 14 16:09:03.349: INFO: Container started at 2021-06-14 16:08:08 +0000 UTC, pod became ready at 2021-06-14 16:09:01 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:03.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3189" for this suite. 
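The long run of "Running (Ready = false)" lines above is the framework polling the pod's Ready condition while its startup probe has not yet succeeded; the pod only reports Ready once startup passes. A sketch of that kind of poll with client-go follows, reusing the namespace and pod name from the log; the kubeconfig handling and poll interval are assumptions, not the framework's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const ns, name = "container-probe-3189", "startup-78e9f3fb-d2c3-413d-95df-324e8289de75"
	for {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Ready flips to true only after the startup probe has succeeded and the
		// readiness checks pass.
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("The status of Pod %s is %s (Ready = %v)\n", name, pod.Status.Phase, ready)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}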
• [SLOW TEST:62.595 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not be ready until startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:396 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should not be ready until startupProbe succeeds","total":-1,"completed":3,"skipped":276,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:01.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:03.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-7726" for this suite. •S ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":4,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples Jun 14 16:07:41.066: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.070: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [Feature:Example] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:50 Jun 14 16:07:41.080: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 Jun 14 16:07:41.186: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-958 create -f -' Jun 14 16:07:49.043: INFO: stderr: "" Jun 14 16:07:49.043: INFO: stdout: "pod/liveness-exec created\n" Jun 14 16:07:49.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=examples-958 create -f -' Jun 14 16:07:50.159: INFO: stderr: "" Jun 14 16:07:50.159: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Jun 14 16:07:54.528: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:07:56.531: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:07:58.430: INFO: Pod: liveness-http, restart count:0 Jun 14 16:07:58.535: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:00.434: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:00.634: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:02.635: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:02.637: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:05.026: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:05.026: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:07.033: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:07.033: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:09.136: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:09.136: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:11.228: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:11.228: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:13.422: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:13.422: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:15.527: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:15.527: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:17.830: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:17.830: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:19.835: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:19.835: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:21.923: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:21.923: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:24.125: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:24.125: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:26.130: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:26.130: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:28.330: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:28.330: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:30.334: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:30.334: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:32.422: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:32.423: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:34.532: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:34.532: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:36.939: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:36.941: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:39.124: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:39.125: INFO: Pod: liveness-http, restart count:0 Jun 14 16:08:41.523: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:41.523: INFO: Pod: liveness-http, restart count:1 Jun 14 16:08:41.523: INFO: Saw liveness-http restart, succeeded... 
Jun 14 16:08:43.923: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:46.333: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:48.724: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:50.728: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:52.930: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:54.933: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:57.622: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:08:59.824: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:09:01.828: INFO: Pod: liveness-exec, restart count:0 Jun 14 16:09:03.832: INFO: Pod: liveness-exec, restart count:1 Jun 14 16:09:03.832: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:03.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-958" for this suite. • [SLOW TEST:82.970 seconds] [k8s.io] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 [k8s.io] Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 liveness pods should be automatically restarted _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/examples.go:67 ------------------------------ SS ------------------------------ {"msg":"PASSED [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:03.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:03.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6520" for this suite. 
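The NodeLease check that just finished verifies that the kubelet keeps a coordination.k8s.io Lease, named after its node, alive in the kube-node-lease namespace. Below is a minimal client-go sketch of the same check, for reference only; it is not the e2e framework's code, the kubeconfig path and node name are simply reused from this log, and error handling is reduced to panics.

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path taken from this run
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    nodeName := "leguer-worker2" // a worker node that appears later in this log

    // Each kubelet owns a Lease named after its node in kube-node-lease and
    // renews it periodically; a recent renewTime is what the e2e case asserts.
    lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), nodeName, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("lease for %s: duration %ds, last renewed %s ago\n",
        nodeName,
        *lease.Spec.LeaseDurationSeconds,
        time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
}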
•S ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":5,"skipped":637,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35 [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:03.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:04.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2166" for this suite. •SS ------------------------------ {"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":2,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:04.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:154 [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:06.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9399" for this suite. 
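For reference, the runAsNonRoot case above boils down to a pod like the sketch below: runAsNonRoot is set but no user ID is supplied anywhere, so the kubelet refuses to start the container rather than let it run as root. This is a hand-written illustration, not the suite's pod; the image, command and helper are placeholders.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b } // small local helper for the *bool field

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "nonroot-no-uid"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox", // placeholder; the suite ships its own test images
                Command: []string{"id"},
                SecurityContext: &corev1.SecurityContext{
                    // RunAsNonRoot without RunAsUser: if the image does not declare a
                    // numeric non-root user either, the kubelet cannot prove the
                    // container would be non-root and never starts it.
                    RunAsNonRoot: boolPtr(true),
                },
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}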
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":3,"skipped":484,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:02.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212 Jun 14 16:09:02.543: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-75f87c35-65cb-447c-94bf-0f8e1bfa2f63" in namespace "security-context-test-9051" to be "Succeeded or Failed" Jun 14 16:09:02.546: INFO: Pod "busybox-readonly-true-75f87c35-65cb-447c-94bf-0f8e1bfa2f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882926ms Jun 14 16:09:04.550: INFO: Pod "busybox-readonly-true-75f87c35-65cb-447c-94bf-0f8e1bfa2f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007071554s Jun 14 16:09:06.553: INFO: Pod "busybox-readonly-true-75f87c35-65cb-447c-94bf-0f8e1bfa2f63": Phase="Failed", Reason="", readiness=false. Elapsed: 4.010124765s Jun 14 16:09:06.553: INFO: Pod "busybox-readonly-true-75f87c35-65cb-447c-94bf-0f8e1bfa2f63" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:06.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9051" for this suite. 
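The Phase="Failed" seen for busybox-readonly-true above is the expected outcome: with readOnlyRootFilesystem=true the root filesystem is mounted read-only, so the container's write attempt fails and the pod terminates unsuccessfully, which the test accepts under its "Succeeded or Failed" condition. A rough equivalent of that pod, with placeholder image and command:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox", // placeholder image
                Command: []string{"sh", "-c", "touch /should-fail"}, // write to / is rejected
                SecurityContext: &corev1.SecurityContext{
                    ReadOnlyRootFilesystem: boolPtr(true),
                },
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}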
•S ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:04.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146 Jun 14 16:09:04.368: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-1554" to be "Succeeded or Failed" Jun 14 16:09:04.371: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400043ms Jun 14 16:09:06.530: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161840667s Jun 14 16:09:08.636: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.26780979s Jun 14 16:09:08.636: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:08.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1554" for this suite. 
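The implicit-nonroot-uid pod above is the companion case: runAsNonRoot is still set and no runAsUser is given, but the image itself declares a numeric non-root USER, so the kubelet can verify it and the pod runs to completion. A sketch, with a hypothetical image reference:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "registry.example/nonroot-image:latest", // hypothetical image built with a numeric USER
                Command: []string{"id"},
                SecurityContext: &corev1.SecurityContext{
                    RunAsNonRoot: boolPtr(true), // no RunAsUser: the UID comes from the image config
                },
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}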
• ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":4,"skipped":537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:06.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:10.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9572" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":571,"failed":0} Jun 14 16:09:10.153: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:58.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 STEP: wait until node is ready Jun 14 16:07:58.825: INFO: Waiting up to 5m0s for node leguer-worker2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Jun 14 16:07:59.931: INFO: node status heartbeat is unchanged for 1.095198836s, waiting for 1m20s Jun 14 16:08:00.926: INFO: node status heartbeat is unchanged for 2.090530147s, waiting for 1m20s Jun 14 16:08:02.324: INFO: node status heartbeat is unchanged for 3.487751104s, waiting for 1m20s Jun 14 16:08:03.125: INFO: node status heartbeat is unchanged for 4.289092095s, waiting for 1m20s Jun 14 16:08:04.132: INFO: node status heartbeat is unchanged for 5.295923695s, waiting for 1m20s Jun 14 16:08:05.026: INFO: node status heartbeat is unchanged for 6.190574371s, waiting for 1m20s Jun 14 16:08:06.126: INFO: node status heartbeat is unchanged for 7.290431984s, waiting for 1m20s Jun 14 16:08:07.034: INFO: node status heartbeat is unchanged for 8.197767765s, waiting for 1m20s Jun 14 16:08:08.124: INFO: node status heartbeat is unchanged for 9.288150506s, waiting for 1m20s Jun 14 16:08:08.926: INFO: node status heartbeat is unchanged for 10.090304876s, waiting for 1m20s Jun 14 16:08:10.326: INFO: node status heartbeat is unchanged for 
11.490292184s, waiting for 1m20s Jun 14 16:08:10.935: INFO: node status heartbeat is unchanged for 12.098729113s, waiting for 1m20s Jun 14 16:08:11.840: INFO: node status heartbeat is unchanged for 13.0045998s, waiting for 1m20s Jun 14 16:08:13.228: INFO: node status heartbeat is unchanged for 14.392217828s, waiting for 1m20s Jun 14 16:08:13.937: INFO: node status heartbeat is unchanged for 15.100937004s, waiting for 1m20s Jun 14 16:08:15.135: INFO: node status heartbeat is unchanged for 16.298647975s, waiting for 1m20s Jun 14 16:08:15.840: INFO: node status heartbeat is unchanged for 17.004329487s, waiting for 1m20s Jun 14 16:08:17.138: INFO: node status heartbeat is unchanged for 18.302016111s, waiting for 1m20s Jun 14 16:08:18.326: INFO: node status heartbeat is unchanged for 19.489903365s, waiting for 1m20s Jun 14 16:08:19.224: INFO: node status heartbeat is unchanged for 20.388353532s, waiting for 1m20s Jun 14 16:08:19.840: INFO: node status heartbeat is unchanged for 21.003920623s, waiting for 1m20s Jun 14 16:08:20.840: INFO: node status heartbeat is unchanged for 22.004312646s, waiting for 1m20s Jun 14 16:08:21.923: INFO: node status heartbeat is unchanged for 23.087423622s, waiting for 1m20s Jun 14 16:08:22.841: INFO: node status heartbeat is unchanged for 24.005105527s, waiting for 1m20s Jun 14 16:08:24.126: INFO: node status heartbeat is unchanged for 25.290310371s, waiting for 1m20s Jun 14 16:08:24.923: INFO: node status heartbeat is unchanged for 26.08757591s, waiting for 1m20s Jun 14 16:08:25.840: INFO: node status heartbeat is unchanged for 27.0043916s, waiting for 1m20s Jun 14 16:08:26.840: INFO: node status heartbeat is unchanged for 28.004606834s, waiting for 1m20s Jun 14 16:08:28.330: INFO: node status heartbeat is unchanged for 29.494156637s, waiting for 1m20s Jun 14 16:08:28.841: INFO: node status heartbeat is unchanged for 30.00527971s, waiting for 1m20s Jun 14 16:08:30.022: INFO: node status heartbeat is unchanged for 31.186314977s, waiting for 1m20s Jun 14 16:08:30.934: INFO: node status heartbeat is unchanged for 32.098328047s, waiting for 1m20s Jun 14 16:08:32.423: INFO: node status heartbeat is unchanged for 33.587549636s, waiting for 1m20s Jun 14 16:08:34.226: INFO: node status heartbeat is unchanged for 35.389923779s, waiting for 1m20s Jun 14 16:08:35.824: INFO: node status heartbeat is unchanged for 36.987743471s, waiting for 1m20s Jun 14 16:08:36.941: INFO: node status heartbeat is unchanged for 38.105303503s, waiting for 1m20s Jun 14 16:08:38.425: INFO: node status heartbeat is unchanged for 39.588833246s, waiting for 1m20s Jun 14 16:08:39.125: INFO: node status heartbeat is unchanged for 40.288968401s, waiting for 1m20s Jun 14 16:08:39.925: INFO: node status heartbeat is unchanged for 41.089553236s, waiting for 1m20s Jun 14 16:08:41.524: INFO: node status heartbeat is unchanged for 42.687759626s, waiting for 1m20s Jun 14 16:08:41.931: INFO: node status heartbeat is unchanged for 43.095477275s, waiting for 1m20s Jun 14 16:08:42.840: INFO: node status heartbeat is unchanged for 44.003891929s, waiting for 1m20s Jun 14 16:08:43.923: INFO: node status heartbeat is unchanged for 45.086953644s, waiting for 1m20s Jun 14 16:08:45.123: INFO: node status heartbeat is unchanged for 46.287387806s, waiting for 1m20s Jun 14 16:08:46.334: INFO: node status heartbeat is unchanged for 47.497752087s, waiting for 1m20s Jun 14 16:08:47.427: INFO: node status heartbeat is unchanged for 48.591568021s, waiting for 1m20s Jun 14 16:08:48.225: INFO: node status heartbeat is unchanged 
for 49.389210672s, waiting for 1m20s Jun 14 16:08:48.939: INFO: node status heartbeat is unchanged for 50.1030047s, waiting for 1m20s Jun 14 16:08:49.841: INFO: node status heartbeat is unchanged for 51.00499047s, waiting for 1m20s Jun 14 16:08:51.130: INFO: node status heartbeat is unchanged for 52.293992257s, waiting for 1m20s Jun 14 16:08:51.840: INFO: node status heartbeat is unchanged for 53.004623502s, waiting for 1m20s Jun 14 16:08:52.930: INFO: node status heartbeat is unchanged for 54.094182097s, waiting for 1m20s Jun 14 16:08:53.842: INFO: node status heartbeat is unchanged for 55.005810332s, waiting for 1m20s Jun 14 16:08:54.840: INFO: node status heartbeat is unchanged for 56.004275627s, waiting for 1m20s Jun 14 16:08:56.031: INFO: node status heartbeat is unchanged for 57.194879424s, waiting for 1m20s Jun 14 16:08:56.929: INFO: node status heartbeat is unchanged for 58.093255757s, waiting for 1m20s Jun 14 16:08:58.025: INFO: node status heartbeat is unchanged for 59.1893125s, waiting for 1m20s Jun 14 16:08:59.326: INFO: node status heartbeat is unchanged for 1m0.490587755s, waiting for 1m20s Jun 14 16:08:59.940: INFO: node status heartbeat is unchanged for 1m1.104553708s, waiting for 1m20s Jun 14 16:09:00.841: INFO: node status heartbeat is unchanged for 1m2.004747378s, waiting for 1m20s Jun 14 16:09:01.839: INFO: node status heartbeat is unchanged for 1m3.003502386s, waiting for 1m20s Jun 14 16:09:02.840: INFO: node status heartbeat is unchanged for 1m4.004591246s, waiting for 1m20s Jun 14 16:09:03.840: INFO: node status heartbeat is unchanged for 1m5.003999286s, waiting for 1m20s Jun 14 16:09:04.840: INFO: node status heartbeat is unchanged for 1m6.004551235s, waiting for 1m20s Jun 14 16:09:06.224: INFO: node status heartbeat is unchanged for 1m7.388006062s, waiting for 1m20s Jun 14 16:09:07.031: INFO: node status heartbeat is unchanged for 1m8.195610439s, waiting for 1m20s Jun 14 16:09:07.841: INFO: node status heartbeat is unchanged for 1m9.004636901s, waiting for 1m20s Jun 14 16:09:08.938: INFO: node status heartbeat is unchanged for 1m10.101778166s, waiting for 1m20s Jun 14 16:09:09.841: INFO: node status heartbeat is unchanged for 1m11.004855306s, waiting for 1m20s Jun 14 16:09:11.025: INFO: node status heartbeat is unchanged for 1m12.189252116s, waiting for 1m20s Jun 14 16:09:11.933: INFO: node status heartbeat is unchanged for 1m13.09732994s, waiting for 1m20s Jun 14 16:09:12.928: INFO: node status heartbeat is unchanged for 1m14.092114591s, waiting for 1m20s Jun 14 16:09:13.840: INFO: node status heartbeat is unchanged for 1m15.003958556s, waiting for 1m20s Jun 14 16:09:14.845: INFO: node status heartbeat is unchanged for 1m16.009574156s, waiting for 1m20s Jun 14 16:09:15.931: INFO: node status heartbeat is unchanged for 1m17.094745132s, waiting for 1m20s Jun 14 16:09:17.131: INFO: node status heartbeat is unchanged for 1m18.294949796s, waiting for 1m20s Jun 14 16:09:17.932: INFO: node status heartbeat is unchanged for 1m19.095935171s, waiting for 1m20s Jun 14 16:09:18.840: INFO: node status heartbeat is unchanged for 1m20.004587789s, was waiting for at least 1m20s, success! STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-7384" for this suite. 
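The heartbeat observations above show the other half of the node-lease design: the kubelet renews its Lease every few seconds, but only updates the Node's .status conditions when something changes or the (much longer) report interval elapses, which is why the test can wait 1m20s without seeing a status change. A small client-go sketch for eyeballing the same thing; the kubeconfig path and node name are reused from this log, and this is illustrative rather than the framework's check.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path taken from this run
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // With node leases enabled, these condition heartbeats move only when the
    // status actually changes or the report interval expires, which is what
    // "node status heartbeat is unchanged" is tracking above.
    node, err := cs.CoreV1().Nodes().Get(context.TODO(), "leguer-worker2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, c := range node.Status.Conditions {
        fmt.Printf("%-20s heartbeat at %s\n", c.Type, c.LastHeartbeatTime.Format("15:04:05"))
    }
}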
• [SLOW TEST:80.344 seconds] [k8s.io] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112 ------------------------------ {"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":356,"failed":0} Jun 14 16:09:18.859: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:14.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should not be ready with a docker exec readiness probe timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233 STEP: Creating pod busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 in namespace container-probe-3599 Jun 14 16:08:23.824: INFO: Started pod busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 in namespace container-probe-3599 Jun 14 16:08:23.824: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (1.293µs elapsed) Jun 14 16:08:25.824: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (2.000203685s elapsed) Jun 14 16:08:27.824: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (4.000470728s elapsed) Jun 14 16:08:29.825: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (6.000761169s elapsed) Jun 14 16:08:31.825: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (8.001066186s elapsed) Jun 14 16:08:33.921: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (10.097607629s elapsed) Jun 14 16:08:35.922: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (12.09793704s elapsed) Jun 14 16:08:37.922: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (14.098197107s elapsed) Jun 14 16:08:39.922: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (16.09851272s elapsed) Jun 14 16:08:41.923: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (18.098804974s elapsed) Jun 14 16:08:43.923: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (20.099141896s elapsed) Jun 14 16:08:45.923: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (22.099419754s elapsed) Jun 14 16:08:47.923: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (24.099715144s elapsed) Jun 14 16:08:49.924: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (26.100004635s elapsed) 
Jun 14 16:08:51.924: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (28.100322402s elapsed) Jun 14 16:08:53.924: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (30.10064456s elapsed) Jun 14 16:08:55.925: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (32.100996696s elapsed) Jun 14 16:08:57.925: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (34.101333941s elapsed) Jun 14 16:08:59.925: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (36.101644839s elapsed) Jun 14 16:09:01.926: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (38.101848616s elapsed) Jun 14 16:09:03.926: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (40.102068989s elapsed) Jun 14 16:09:05.926: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (42.102329651s elapsed) Jun 14 16:09:07.926: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (44.102636767s elapsed) Jun 14 16:09:09.927: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (46.10294705s elapsed) Jun 14 16:09:11.927: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (48.103306173s elapsed) Jun 14 16:09:13.927: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (50.103516742s elapsed) Jun 14 16:09:15.928: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (52.103829972s elapsed) Jun 14 16:09:17.928: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (54.104106556s elapsed) Jun 14 16:09:19.928: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (56.10440055s elapsed) Jun 14 16:09:21.928: INFO: pod container-probe-3599/busybox-94215efd-774f-42d8-b087-86cb3c2b8b13 is not ready (58.104727271s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:24.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3599" for this suite. 
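The probe spec behind a case like this is simple: an exec readiness command that runs longer than timeoutSeconds, so every probe attempt times out and the container is never reported Ready. The sketch below uses made-up values and a placeholder image; it is not the suite's exact pod.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The readiness command sleeps longer than timeoutSeconds, so every probe
    // attempt times out and the container never becomes Ready.
    readiness := &corev1.Probe{
        TimeoutSeconds: 1,
        PeriodSeconds:  2,
    }
    readiness.Exec = &corev1.ExecAction{Command: []string{"sleep", "10"}}

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "readiness-exec-timeout"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:           "main",
                Image:          "busybox", // placeholder image
                Command:        []string{"sleep", "3600"},
                ReadinessProbe: readiness,
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}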
• [SLOW TEST:70.414 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not be ready with a docker exec readiness probe timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:233 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should not be ready with a docker exec readiness probe timeout ","total":-1,"completed":5,"skipped":969,"failed":0} Jun 14 16:09:24.540: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:07.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:270 STEP: Creating pod liveness-fe3ac850-e860-48de-988f-79c2c2dc4865 in namespace container-probe-7655 Jun 14 16:09:11.530: INFO: Started pod liveness-fe3ac850-e860-48de-988f-79c2c2dc4865 in namespace container-probe-7655 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:09:11.534: INFO: Initial restart count of pod liveness-fe3ac850-e860-48de-988f-79c2c2dc4865 is 0 Jun 14 16:09:35.931: INFO: Restart count of pod container-probe-7655/liveness-fe3ac850-e860-48de-988f-79c2c2dc4865 is now 1 (24.394519324s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:09:36.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7655" for this suite. 
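This case relies on an HTTP liveness endpoint that answers with a redirect to another path on the same host; the kubelet follows such local redirects, so when the redirect target starts failing the container is restarted (the restartCount 0 -> 1 above). A hand-written sketch with an illustrative path, port and image:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    liveness := &corev1.Probe{
        InitialDelaySeconds: 5,
        PeriodSeconds:       3,
        FailureThreshold:    1,
    }
    // The path below is assumed to answer with a redirect to another path on
    // the same host; the kubelet follows it, so the probe's fate is the target's.
    liveness.HTTPGet = &corev1.HTTPGetAction{
        Path: "/redirect?loc=/healthz",
        Port: intstr.FromInt(8080),
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-local-redirect"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:          "main",
                Image:         "registry.example/liveness-server:latest", // hypothetical image
                LivenessProbe: liveness,
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}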
• [SLOW TEST:29.407 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:270 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":874,"failed":0} Jun 14 16:09:36.639: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:08:43.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:309 STEP: Creating pod startup-aa92def4-4dc4-4946-90af-2891d221df8f in namespace container-probe-4344 Jun 14 16:08:50.630: INFO: Started pod startup-aa92def4-4dc4-4946-90af-2891d221df8f in namespace container-probe-4344 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:08:50.637: INFO: Initial restart count of pod startup-aa92def4-4dc4-4946-90af-2891d221df8f is 0 Jun 14 16:10:42.849: INFO: Restart count of pod container-probe-4344/startup-aa92def4-4dc4-4946-90af-2891d221df8f is now 1 (1m52.212390418s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:10:43.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4344" for this suite. 
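A startup probe that never succeeds produces exactly this pattern: after failureThreshold * periodSeconds the kubelet gives up and restarts the container, hence the single restart after roughly two minutes. Sketch with illustrative thresholds and a placeholder image:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A startup probe that can never pass: after failureThreshold * periodSeconds
    // the kubelet kills and restarts the container.
    startup := &corev1.Probe{
        PeriodSeconds:    10,
        FailureThreshold: 10,
    }
    startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/does/not/exist"}}

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "startup-never-succeeds"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "main",
                Image:        "busybox", // placeholder image
                Command:      []string{"sleep", "3600"},
                StartupProbe: startup,
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}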
• [SLOW TEST:119.748 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:309 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted startup probe fails","total":-1,"completed":7,"skipped":515,"failed":0} Jun 14 16:10:43.135: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:02.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:338 STEP: Creating pod startup-13d48025-4c46-4a88-aa49-f0f3a925c6b6 in namespace container-probe-4916 Jun 14 16:09:05.224: INFO: Started pod startup-13d48025-4c46-4a88-aa49-f0f3a925c6b6 in namespace container-probe-4916 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:09:05.228: INFO: Initial restart count of pod startup-13d48025-4c46-4a88-aa49-f0f3a925c6b6 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:13:12.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4916" for this suite. 
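The point of this case is ordering: the liveness probe is not executed until the startup probe has succeeded, so a slow-starting container is not killed even by an aggressive liveness check, and restartCount stays at 0. A sketch of a container carrying both probes; the thresholds, command and image are illustrative, not the suite's values.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Liveness probing is suspended until the startup probe succeeds, so the
    // long sleep before /tmp/started exists does not get the container killed.
    startup := &corev1.Probe{PeriodSeconds: 10, FailureThreshold: 30}
    startup.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}}

    liveness := &corev1.Probe{PeriodSeconds: 5, FailureThreshold: 1}
    liveness.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}}

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "startup-delays-liveness"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:          "main",
                Image:         "busybox", // placeholder image
                Command:       []string{"sh", "-c", "sleep 120; touch /tmp/started; sleep 3600"},
                StartupProbe:  startup,
                LivenessProbe: liveness,
            }},
        },
    }
    fmt.Println("would submit pod:", pod.Name)
}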
• [SLOW TEST:252.019 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:338 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":3,"skipped":611,"failed":0} Jun 14 16:13:14.930: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:09:04.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:285 STEP: Creating pod liveness-b497147b-61bd-4c01-9dd4-7c7bd92dd6e8 in namespace container-probe-7753 Jun 14 16:09:08.433: INFO: Started pod liveness-b497147b-61bd-4c01-9dd4-7c7bd92dd6e8 in namespace container-probe-7753 STEP: checking the pod's current state and verifying that restartCount is present Jun 14 16:09:08.437: INFO: Initial restart count of pod liveness-b497147b-61bd-4c01-9dd4-7c7bd92dd6e8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:13:14.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7753" for this suite. 
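Here the liveness endpoint redirects to a different host. The kubelet does not follow non-local redirects; it treats the response as a probe success (recent kubelets record a warning event instead), so the container is never restarted. One way to inspect that after the fact is to list the pod's events, as sketched below; the namespace and pod name are copied from this log but are deleted when the test ends, and the exact event reason emitted is version-dependent, so treat this purely as an illustration.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path, as in this run
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns, pod := "container-probe-7753", "liveness-b497147b-61bd-4c01-9dd4-7c7bd92dd6e8" // names from the log

    // List the events attached to the pod; any probe-related warning about the
    // off-host redirect would show up here rather than as a container restart.
    evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "involvedObject.name=" + pod,
    })
    if err != nil {
        panic(err)
    }
    for _, e := range evs.Items {
        fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
    }
}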
• [SLOW TEST:253.089 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:285 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":6,"skipped":677,"failed":0} Jun 14 16:13:17.322: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:41.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:682 STEP: getting restart delay-0 Jun 14 16:09:38.538: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-06-14 16:08:56 +0000 UTC restartedAt=2021-06-14 16:09:37 +0000 UTC (41s) STEP: getting restart delay-1 Jun 14 16:11:04.141: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-06-14 16:09:42 +0000 UTC restartedAt=2021-06-14 16:11:03 +0000 UTC (1m21s) STEP: getting restart delay-2 Jun 14 16:14:00.926: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-06-14 16:11:08 +0000 UTC restartedAt=2021-06-14 16:14:00 +0000 UTC (2m52s) STEP: updating the image Jun 14 16:14:01.438: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Jun 14 16:14:27.825: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-06-14 16:14:12 +0000 UTC restartedAt=2021-06-14 16:14:26 +0000 UTC (14s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:14:27.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3800" for this suite. 
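The back-off numbers above (41s, 1m21s, 2m52s, then 14s) show the kubelet's exponential crash-loop delay being reset when the container image is updated. The test drives that update through the API; a minimal client-go sketch of such an image patch follows. The pod and namespace names come from this log, while the container name and the replacement image are placeholders.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path, as in this run
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Strategic-merge-patch the container image; after the update, the next
    // restart delay starts from the beginning of the back-off curve again.
    // "back-off" and "busybox:1.33" are placeholders, not the suite's values.
    patch := []byte(`{"spec":{"containers":[{"name":"back-off","image":"busybox:1.33"}]}}`)
    _, err = cs.CoreV1().Pods("pods-3800").Patch(context.TODO(), "pod-back-off-image",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("patched pod-back-off-image with a new image")
}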
• [SLOW TEST:406.256 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:682 ------------------------------ {"msg":"PASSED [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":1,"skipped":413,"failed":0} Jun 14 16:14:27.838: INFO: Running AfterSuite actions on all nodes [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 16:07:40.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods Jun 14 16:07:41.066: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 16:07:41.069: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:723 STEP: getting restart delay when capped Jun 14 16:19:15.847: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-06-14 16:14:05 +0000 UTC restartedAt=2021-06-14 16:19:14 +0000 UTC (5m9s) Jun 14 16:24:27.329: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-06-14 16:19:19 +0000 UTC restartedAt=2021-06-14 16:24:26 +0000 UTC (5m7s) Jun 14 16:29:39.824: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-06-14 16:24:31 +0000 UTC restartedAt=2021-06-14 16:29:39 +0000 UTC (5m8s) STEP: getting restart delay after a capped delay Jun 14 16:34:53.963: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-06-14 16:29:44 +0000 UTC restartedAt=2021-06-14 16:34:52 +0000 UTC (5m8s) [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 16:34:53.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2647" for this suite. • [SLOW TEST:1633.144 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:723 ------------------------------ {"msg":"PASSED [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":59,"failed":0} Jun 14 16:34:53.979: INFO: Running AfterSuite actions on all nodes Jun 14 16:09:08.744: INFO: Running AfterSuite actions on all nodes Jun 14 16:34:54.032: INFO: Running AfterSuite actions on node 1 Jun 14 16:34:54.033: INFO: Skipping dumping logs from cluster Ran 37 of 5668 Specs in 1634.300 seconds SUCCESS! -- 37 Passed | 0 Failed | 0 Pending | 5631 Skipped Ginkgo ran 1 suite in 27m16.065605302s Test Suite Passed